juvi21/README.md

Hello Friend 👋

If you know of a paper, problem, or project, especially in the domains of AI, CTF, or competitive programming, that could benefit from an open-source implementation, I'd love to hear from you! Please use this form link to share your ideas, or reach out to me directly via mail or Discord.

Projects

  • self-compressing-nn-jax JAX implementation of Self-Compressing Neural Networks (https://arxiv.org/pdf/2301.13142), ported from geohot's tinygrad version; the core idea is sketched after this list. repo
  • llama2.jl Inference Llama 2 in one file of pure C. Nahh wait, now fresh in Julia! repo
  • CoPE-cuda Contextual Position Encoding (https://arxiv.org/abs/2405.18719) with some custom CUDA kernels; the mechanism is sketched after this list. repo
  • Competitive-Coding-Benchmark Benchmark LLMs against a set of high-quality, representative competitive-programming problems sourced from cses.fi. repo
  • nngdb A terminal-based neural-network interpretability tool that feels like gdb. repo
  • cudatiger An accelerated implementation of the Tiger optimizer (https://kexue.fm/archives/9512) for PyTorch, supercharged with Triton for better CUDA GPU efficiency; more memory-efficient than Adam (the update rule is sketched after this list). repo
  • tinyfep A simplified and abstracted rendition of FEP+ (Free Energy Perturbation) for molecular simulations. https://www.schrodinger.com/products/fep repo
  • betterwords.lol Expand human vocabulary with gpt-3.5-turbo or a fine-tuned llama-2, built in React. link
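
The core idea behind self-compressing-nn-jax is a fake-quantizer whose bit depth and exponent are themselves trained, with a size penalty added to the task loss so the network shrinks itself. The sketch below is a minimal rendition of that idea in plain PyTorch rather than the repo's JAX; the function names, the straight-through rounding, and the granularity of `bits` and `exp` are my own illustrative assumptions, not the repo's API.

```python
import torch

def fake_quantize(w, bits, exp):
    # w: weight tensor; bits, exp: learnable tensors (e.g. one scalar per weight tensor).
    # round() has zero gradient, so a straight-through estimator lets gradients
    # flow back into w, bits and exp.
    scale = 2.0 ** exp
    q = torch.clamp(w / scale, min=-(2.0 ** (bits - 1)), max=2.0 ** (bits - 1) - 1)
    q = q + (q.round() - q).detach()  # straight-through round
    return q * scale

def size_penalty(bits, num_weights):
    # Differentiable proxy for the model footprint in bits; adding
    # lam * size_penalty(...) to the task loss makes the network trade
    # accuracy against its own size.
    return (torch.clamp(bits, min=0.0) * num_weights).sum()
```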
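
CoPE (the paper behind CoPE-cuda) makes positions a function of context: each query gates which earlier keys count as a position step, and the resulting fractional positions select interpolated position logits that are added to the attention scores. Below is a single-head, unscaled PyTorch sketch of that logic under my own naming and shape conventions; the custom CUDA kernels that are the point of the repo are deliberately not shown.

```python
import torch

def cope_attention_logits(q, k, pos_emb):
    # q, k: (T, d) single-head projections; pos_emb: (max_pos, d).
    T = q.size(0)
    causal = torch.ones(T, T).tril().bool()
    scores = q @ k.t()                                    # content logits q_i . k_j
    gates = torch.sigmoid(scores).masked_fill(~causal, 0.0)
    # contextual position p_ij = sum of gates over keys j..i
    # (reverse cumulative sum along the key axis)
    pos = gates.flip([-1]).cumsum(-1).flip([-1])
    pos = pos.clamp(max=pos_emb.size(0) - 1)
    # fractional positions: interpolate the logits of the two nearest
    # integer position embeddings instead of building a new embedding
    q_pos = q @ pos_emb.t()                               # (T, max_pos)
    lo, hi = pos.floor().long(), pos.ceil().long()
    frac = pos - pos.floor()
    pos_logits = (1 - frac) * q_pos.gather(-1, lo) + frac * q_pos.gather(-1, hi)
    return (scores + pos_logits).masked_fill(~causal, float("-inf"))
```

A softmax over the returned logits gives the attention weights.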
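
The memory claim for cudatiger comes from Tiger's update rule itself: one exponential moving average plus a sign step, so a single optimizer state per parameter instead of Adam's two. Here is a minimal PyTorch sketch of that rule; the hyperparameter defaults are illustrative, and the repo's actual value-add, the fused Triton kernel, is not reproduced.

```python
import torch

@torch.no_grad()
def tiger_step(params, grads, momenta, lr=1e-3, beta=0.965, weight_decay=0.01):
    # One Tiger update per parameter tensor:
    #   m <- beta * m + (1 - beta) * g        (single EMA buffer)
    #   p <- p - lr * (sign(m) + wd * p)      (sign step + decoupled weight decay)
    for p, g, m in zip(params, grads, momenta):
        m.mul_(beta).add_(g, alpha=1 - beta)
        p.add_(torch.sign(m) + weight_decay * p, alpha=-lr)
```

Here `momenta` would be a list of zero-initialized tensors shaped like the parameters.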

Models

Datasets

Pinned

  1. llama2.jl

    Forked from karpathy/llama2.c

    Inference Llama 2 in one file of pure C. Nahh wait, now fresh in Julia!

    Python · 21 stars · 1 fork

  2. CoPE-cuda

    Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719

    Python · 20 stars

  3. self-compressing-nn-jax

    Implementation of Self-Compressing Neural Networks in JAX https://arxiv.org/pdf/2301.13142 ported from tinygrad https://github.com/geohot/ai-notebooks/blob/master/mnist_self_compression.ipynb

    Python

  4. init.vim

    call plug#begin()

    " UI and Themes
    Plug 'drewtempelmeyer/palenight.vim'
    Plug 'morhetz/gruvbox'
  5. cudatiger

    An accelerated implementation of the Tiger optimizer for PyTorch, supercharged with Triton for enhanced CUDA GPU efficiency; more memory-efficient than Adam. https://kexue.fm/archives/9512

    Python · 2 stars

  6. Competitive-Coding-Benchmark

    Benchmark LLMs against a set of high-quality, representative competitive-programming problems sourced from cses.fi

    Python · 2 stars