Best LLM: GPT-3, Alpaca, or LLaMA? (Strictly speaking, they are openly available, not open source; the definition of open source comes from the OSI.) Are there other openly available LLMs? The reply is from the LLM itself.

Say the question asks which of the small models deals best with math-related questions: how do you decide? What website do you use to make your decision? Language models ranked and analyzed by usage across apps. The best AI leaderboards for GPT-5, Claude, DeepSeek, Qwen, and more. Nov 4, 2025 · Wondering which is the best LLM for coding in 2025? Dive into our ranked list of free, local, and open-source coding models.

I've been wanting to get into local LLMs, and the release of Llama 3 seems like the perfect catalyst. I have some background in deep learning (e.g. ML courses, training ranking and classification models), but I'm looking for resources (blogs, videos, podcasts, courses) to learn more about LLMs. I'm excited to hear about what you've been building and possibly using on a daily basis.

I want to use them for a RAG-based Q&A task, and from the first benchmarks the smaller models suffer most on number-related questions. I was using Mistral 7B so far. However, I have seen interesting tests with StarCoder.

Curious to know if there's any coding LLM that understands language very well and also has strong coding ability on par with, or surpassing, that of DeepSeek? Talking about 7B models, but how about 33B models too?

I'm not an LLM specialist, but below is what's in my queue for learning LLMs. I learned how to further train a pretrained model with DeepSpeed, then Axolotl appeared. When I got that, LoRA appeared. Since you already know LangChain, I'll just skip what I know in LangChain. DeepLearning.AI: founded by Andrew Ng, the founder of Coursera, and recently…

Best local base models by size, quick guide.

It's impossible to have an unbiased leaderboard that's also public. If it has humans rating the answers, then it's by definition biased. If it's closed, then it's biased by the quality of the benchmark creators. If it is public, it's probable that over time it'll leak into the training / fine-tuning datasets. Cost of experts, their availability, and their humanness all contribute to some bias.

Nov 3, 2024 · On Twitter, everyone says Claude Sonnet is the best, but LMArena shows otherwise. On the Chatbot Arena leaderboard, Gemini Ultra is usually considered the 3rd frontier model on the list beyond GPT and Claude, with Mistral Large and Llama 2 as the top-tier open-source models.

I share my results because my methodical procedure helps others and provides another data point in addition to automated benchmarks and other tests. This is the second part of my Huge LLM Comparison/Test: 39 models tested (7B-70B + ChatGPT/GPT-4), where I continue evaluating the winners of the first part further. …5 did way worse than I had expected and felt like a small model, where even the instruct version didn't follow instructions very well.

Hi r/LocalLLaMA! I've learnt loads from this community about running open-weight LLMs locally, and I understand how overwhelming it can be to navigate this landscape of open-source LLM inference tools. That's why I've created the awesome-local-llms GitHub repository to compile all available options in one streamlined place. In this repository, I've scraped publicly available GitHub metrics…

What's your favorite LLM right now, and what's the use case you're using it for? Bonus points for whether you're happy with the free tier or feel the paid tier is necessary to enjoy/benefit from using it.

So besides GPT-4, I have found Codeium to be the best, imo.

I'm aware I could wrap the LLM with FastAPI or something like vLLM, but I'm curious if anyone is aware of other recent solutions or best practices based on your own experiences doing something similar. EDIT: Thanks for all the recommendations! Will try a few of these solutions and report back with results for those interested.
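For the serving question just above, here is a minimal sketch of what wrapping a local model behind FastAPI can look like. It's an illustration under assumptions rather than a recommendation: the model name, route, and request schema are placeholders to swap for your own setup.

```python
# Minimal sketch: a local Hugging Face model behind a FastAPI endpoint.
# The model name, route, and schema below are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Loaded once at startup; substitute whatever local model you actually run.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 256

@app.post("/generate")
def generate(req: GenerateRequest):
    # The text-generation pipeline returns a list of dicts with "generated_text".
    out = generator(req.prompt, max_new_tokens=req.max_new_tokens, do_sample=True)
    return {"completion": out[0]["generated_text"]}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```

That said, a dedicated server such as vLLM, which exposes an OpenAI-compatible API, will usually handle batching and throughput better than a hand-rolled wrapper like this.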
Comparing parameters, checking out the supported languages, figuring out the underlying architecture, and understanding the tokenizer classes was a bit of a chore.

Comparison and ranking of the performance of over 100 AI models (LLMs) across key metrics, including intelligence, price, performance and speed (output speed in tokens per second and latency, i.e. time to first token), context window, and others.

Like the title says, I need an LLM for novel-writing.

🐺🐦⬛ LLM Comparison/Test: Brand new models for 2024 (Dolphin 2.6/2.7 Mistral/Mixtral/Phi-2, Sonya, TinyLlama). While the previous part was about real work use cases, this one is about the fun stuff: chat and roleplay! Anyway, first and foremost, I do these tests for myself, to determine the best models for my use cases (which means running them locally at acceptable speeds). Our best 70Bs do much better than… I just don't like writing reviews about subpar LLMs or the ones that still need some fixes, instead focusing on recommending those that have knocked my socks off.

T^T In any case, I'm very happy with Llama-3-70b-Uncensored-Lumi-Tess-gradient, but running it is a challenge. What has your experience been? Thank you.

What open-source LLMs are your "daily driver" models that you use most often? What use cases do you find each of them best for?

That said, we need to know what "best" uncensored model actually means to you. Best is so conditionally subjective. Best at writing porn? Best at designing IEDs? Best at writing extremist propaganda? Best at writing hostile code? Basically all models have some specialization, so we need to know what the actual goal is to tell you which 70B is best.

What's up, roleplaying gang? Hope everyone is doing great! I know it's been some time since my last recommendation, and let me reassure you: I've been on the constant lookout for new good models.

At the moment, I use DeepSeek Coder 33B via Together for chatting about code and StarCoder2 3B using Ollama for tab-autocompleting code. These are the ones that I have found work best for me so far. That being said… DeepSeek-Coder 6.7B/33B/67B, Phind-CodeLlama v2.

Observations: GPT-4 is the best LLM, as expected, and achieved perfect scores (even when not provided the curriculum information beforehand)! It's noticeably slow, though.

Whether you are a seasoned LLM and NLP developer or just getting started in the field, this subreddit is the perfect place for you to learn, connect, and collaborate with like-minded individuals.

What is your best roadmap to learn LLMs, for machine learning engineers rather than beginners? Are there any resources that you'd recommend? Hugging Face's learning resources are a good place to start. You can choose a high-level area (NLP, vision, or audio) and then dig into it.

Edit: As of 12-01-2023. If there are any good ongoing projects that I should know about, please share as well! For me, a lot of my hesitation with exploring other LLMs is that I don't want to waste my time.

A 34B model is the best fit for a 24GB GPU right now.
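Since a couple of the comments above lean on Ollama for local inference (StarCoder2 for tab-autocomplete, models sized to a 24GB card), here is a minimal sketch of calling a locally running Ollama server from Python. It assumes Ollama is on its default port and that the model tag has already been pulled (e.g. with `ollama pull starcoder2:3b`); both are assumptions about your setup, not something any post above prescribes.

```python
# Minimal sketch: request a completion from a locally running Ollama server.
# Assumes the default Ollama port (11434) and an already-pulled model tag.
import json
import urllib.request

def complete(prompt: str, model: str = "starcoder2:3b") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full completion in "response".
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(complete("def fibonacci(n):"))
```

The same endpoint works for any pulled model, so trying a 34B-class model on a 24GB GPU is just a change of tag.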
What do you guys think the best ones are right now? A slightly different question: I love the idea of having something like GPT-4 running on my own data in my home, but the discrepancy between the various LLMs here is so stark that I would need a 70B-parameter model to accomplish that, not something I can run on my own hardware.

The release of Jamba marks two significant milestones in LLM innovation: successfully combining Mamba with Transformer architectures, and advancing hybrid SSM-Transformer models to production-level scale and quality.

What is the best open-source LLM for generating code commentary for C files? I tried CodeLlama and it just kept on generating code instead of commentary only.

Have a look at the LLM rankings leaderboard to see how the most popular LLMs compare against each other: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard. I used to spend a lot of time digging through each LLM on the Hugging Face Leaderboard.

I compared some locally runnable LLMs on my own hardware (i5-12490F, 32GB RAM) on a range of tasks here… (June 2024 ed.) But keep in mind that this is currently the bleeding edge, and none of these are capable of programming simple projects yet in my experience. Phind is good for a search engine/code engine: good speed and a huge context window.

Oct 8, 2024 · Explore top Reddit threads on LLM agents, highlighting applications, challenges, and future possibilities in AI-driven automation.

Looking for the best LLM with a large context bank, for novel-writing, brainstorming, building timelines, etc. Hey everyone. nous-capybara-34b is a good start. Not the fastest thing in the world running local, only about 5 tps, but the responses and creativity are good and the context length is good.

LLM leaderboard, comparison, and rankings for AI models by context window, speed, and price.

You can edit the LLM message just like you can edit your own. I then edit its comment.

You can share your latest projects, ask for feedback, seek advice on best practices, and participate in discussions on emerging trends and technologies.

They are so well known already, but just in case you don't know them, or any other member of this subreddit is looking for resources to learn or get to know LLMs, here are my 2 cents. You can learn about Transformers, which are the backbone of LLMs.
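Following on from the learning advice above, a minimal hands-on sketch with the Hugging Face Transformers library: load a small model, inspect how its tokenizer splits text, and generate a short continuation. The choice of gpt2 here is only because it is tiny and quick to download, not a model recommendation.

```python
# Minimal sketch: poke at a tokenizer and a causal LM with Hugging Face Transformers.
# gpt2 is used purely as a small, fast-to-download example model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# See how the tokenizer breaks a sentence into tokens and IDs.
inputs = tokenizer("Open-weight LLMs you can run locally", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))

# Generate a short greedy continuation.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```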