So, you think GPT-4 can't make a complex game... think again!
Here's how I used GPT-4,
@Replit
, Midjourney, and Claude, assembling a team of AI assistants, to create a 3D space runner from scratch with ZERO knowledge of JavaScript or 3D game programming
Follow along for a saga! 🧵
🐼03/11 Feature Improvements 🚀
- Improved QA accuracy
- Instructional messages for each file
- Improved file upload experience with one-step selection and upload
- Fallback for questions that don't have context
- Added Stripe payment
Try it out❤️:
Gif-PT
Make a gif. Uses DALL·E 3 to make a spritesheet, then code interpreter to slice it and animate. Includes an automatic refinement and debug mode.
Uses DALL·E to draw images, turning the user request into:
- Item asset sprites
- In-game sprites
- A sprite
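The slice-and-animate step can be sketched in plain Python (the function and demo data here are illustrative stand-ins for what code interpreter would run on the DALL·E spritesheet):

```python
def slice_spritesheet(pixels, rows, cols):
    """Cut an H x W pixel grid into rows*cols frames, row-major.

    pixels: 2D list of pixel values. In practice you'd load the DALL-E
    spritesheet with Pillow and use Image.crop instead of raw lists.
    """
    fh, fw = len(pixels) // rows, len(pixels[0]) // cols
    return [
        [row[c * fw:(c + 1) * fw] for row in pixels[r * fh:(r + 1) * fh]]
        for r in range(rows)
        for c in range(cols)
    ]

# Demo: an 8x4 "spritesheet" holding (x, y) coordinates as fake pixels,
# sliced into a 2x4 grid of 2x2 frames.
sheet = [[(x, y) for x in range(8)] for y in range(4)]
frames = slice_spritesheet(sheet, rows=2, cols=4)
```

For the real thing, Pillow handles both ends: `Image.crop` for the slicing, and `Image.save(path, save_all=True, append_images=frames[1:], duration=100, loop=0)` to assemble the frames into a looping GIF.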
ChatLLaMA - an open-source implementation of LLaMA based on RLHF.
Claims a 15x faster training process than ChatGPT. It allows users to fine-tune personalized ChatLLaMA assistants.
Many people are probably as curious as I am about why these are called "large" language models, and why parameter counts have grown from millions to tens of millions to hundreds of billions and are still climbing. On what basis? The answer is the guiding paper OpenAI published three years ago, Scaling Laws for Neural Language Models, a thorough study of how model performance relates to parameter count, dataset size, compute, and transfer ability.
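For reference, the paper's headline result (exponents quoted from memory from Kaplan et al., so treat as approximate) is that test loss falls as a power law in each resource when the other two are not the bottleneck:

```latex
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \quad \alpha_N \approx 0.076
\qquad
L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
\qquad
L(C_{\min}) = \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C}, \quad \alpha_C \approx 0.050
```

Here $N$ is non-embedding parameter count, $D$ dataset size in tokens, and $C_{\min}$ compute; the smooth, spans-many-orders-of-magnitude fit is the basis for "bigger keeps helping."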
Pressure Testing GPT-4-128K With Long Context Recall
128K tokens of context is awesome - but what's performance like?
I wanted to find out so I did a “needle in a haystack” analysis
Some expected (and unexpected) results
Here's what I found:
Findings:
* GPT-4’s recall
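The "needle in a haystack" setup is simple to reproduce: bury one out-of-place fact at a controlled depth in otherwise unrelated filler text, then ask the model to retrieve it, sweeping context length against insertion depth. A minimal sketch (the filler and needle strings here are placeholders, not the ones used in the analysis above):

```python
def build_haystack(filler: str, needle: str, n_words: int, depth: float) -> str:
    """Assemble a context of n_words filler words with the needle
    sentence inserted at a fractional depth (0.0 = start, 1.0 = end)."""
    base = filler.split()
    words = (base * (n_words // len(base) + 1))[:n_words]
    words.insert(int(depth * len(words)), needle)
    return " ".join(words)

# One cell of the sweep: a ~1,000-word context with the fact buried halfway in.
prompt = build_haystack(
    filler="Background filler text standing in for long essays. ",
    needle="The magic number for this test is 8675309.",
    n_words=1000,
    depth=0.5,
)
```

Each (context length, depth) cell is then scored on whether the model's answer contains the needle fact, which is what produces the recall heatmaps.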
🔥Big news from Chatbot Arena: Meet our new MT-Bench leaderboard & Vicuna-33B!
We present a comprehensive, scalable, and validated leaderboard differentiating across open (Falcon, Wizard & Guanaco) and proprietary models (GPT-4, Claude & PaLM).
Blog post:
Sitting on a cable car listening to
@onenewbite
's 2023 year-in-review, hearing the doubts netizens raised when Tesla hit its low at the start of the year, and the talk of SPY as the safe bet. With the sun on me, I felt completely at ease. More information doesn't always help; it's the knowledge, vision and belief.
"Friday morning in Brooklyn, New York"🗽
Using AE and Generative Fill to transform a static Midjourney pic into an animated one (see comments)
Time spent: 16 hours 10 minutes. Complexity 9/10
Sound is a MUST 🔊
Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing.
Claude 2.1 is available over API in our Console, and is powering our chat experience.
Meet MPT-30B, the latest member of
@MosaicML
's family of open-source, commercially usable models. It's trained on 1T tokens with up to 8k context (even more w/ALiBi) on A100s and *H100s* with big improvements to Instruct and Chat. Take it for a spin on HF!
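The "even more w/ALiBi" claim rests on ALiBi replacing learned positional embeddings with a fixed, head-specific linear penalty on attention scores by key distance; since nothing is tied to a trained sequence length, the bias extrapolates past the 8k training context. A minimal sketch of the bias (pure Python for clarity; real implementations compute this as a tensor):

```python
def alibi_slopes(n_heads: int) -> list[float]:
    """Geometric per-head slopes from the ALiBi paper (power-of-2 head counts):
    for 8 heads, 1/2, 1/4, ..., 1/256."""
    start = 2 ** (-8 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

def alibi_bias(n_heads: int, seq_len: int):
    """bias[h][i][j] = -slope_h * (i - j), added to pre-softmax attention
    scores for query i attending to key j. Entries with j > i are
    irrelevant: the causal mask removes them in practice."""
    return [
        [[-s * (i - j) for j in range(seq_len)] for i in range(seq_len)]
        for s in alibi_slopes(n_heads)
    ]

bias = alibi_bias(n_heads=8, seq_len=4)
```

Distant keys are penalized more on steep-slope heads and barely at all on shallow ones, giving the model a mix of short- and long-range attention without any position table to outgrow.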
Introducing ChatGPT Enterprise: enterprise-grade security, unlimited high-speed GPT-4 access, extended context windows, and much more. We’ll be onboarding as many enterprises as possible over the next few weeks. Learn more:
OpenLLaMA 13B Released
model:
We present a permissively licensed open-source reproduction of Meta AI's LLaMA large language model. We are releasing 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of the pre-trained OpenLLaMA models.