Howdy, wizards. Here’s what’s brewing in AI this week:
⚫️ Tool Picks Our latest top-pick AI tools
🟣 Dario’s Picks
- Four highly capable text-to-video models launched in the past weeks: Gen-3, VEO, Kling and Dream Machine. Still no launch date for OpenAI’s Sora.
- Nvidia’s Nemotron-4 makes AI training cheaper and easier. And it aligns perfectly with their strategy.
- Microsoft retires their GPT builder. Makes sense, it was basically just a copy of OpenAI’s GPT store, but with a fraction of the users.
- OpenAI sheds light on how they handle user privacy. Gotta prepare the masses for ChatGPT coming to Siri later this year.
🟢 GPTs Top newcomers on whatplugin.ai
Tool picks
The best AI tools we’ve tried recently. We personally test and review AI tools for the tasks where you need them most. Read about how we test & review.
Previous top picks:
- 🥇 SciSpace – Academic research
- 🥇 Gamma – Presentations
- 🥇 Gizmo – Flashcards
Dario’s Picks
1. Four new Sora challengers: Gen-3, VEO, Kling and Dream Machine
While OpenAI’s Sora still has no release date, four new text-to-video generation models have been released or announced in the last few weeks:
- Runway’s Gen-3 Alpha. The first of a new series of models trained on Runway’s new infrastructure, promising notable gains in fidelity, consistency and motion over Gen-2. Announced, but not yet publicly available.
- Google Deepmind’s VEO. Announced at Google’s recent I/O conference. Can take pure text prompts, reference images or a combination of the two to generate clips. Check out some demos here. Still not released, but you can join the waitlist to try some of VEO’s features.
- KWAI’s KLING. A Chinese Sora competitor announced 1.5 weeks ago; its demos have been a hit on social media, and it looks nearly as good as Sora. Currently waitlist-only and available only in China.
- Luma Labs’ Dream Machine. Creates realistic clips from text prompts or images, and seems to be on the level of leading video generators Runway and Pika. It’s already out and you can test it here.
Why it matters These launches could put pressure on OpenAI to accelerate the launch of Sora to avoid getting left behind, but perhaps not too much; OpenAI holds a major distribution ace if it decides to bundle Sora with ChatGPT when it does launch.
2. Nvidia’s Nemotron-4 makes AI training cheaper and easier
Nvidia’s latest release, Nemotron-4 340B, is a family of open models that developers can use to generate synthetic data for training LLMs, with industry applications across healthcare, finance, manufacturing and retail.
With it, Nvidia is tackling a real bottleneck: high-quality training data for industry-specific models is expensive and hard to access. Nemotron-4 offers a free, scalable way to generate that data and build powerful new LLMs.
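For developers curious what synthetic data generation with a model like this looks like in practice, here’s a minimal sketch that builds a request for synthetic Q&A training pairs. The endpoint URL and model identifier below are assumptions for illustration (NVIDIA serves its hosted models through an OpenAI-compatible chat completions API, but check the current catalog for exact names):

```python
import json

# Assumed endpoint and model id -- verify against NVIDIA's current API catalog.
API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL = "nvidia/nemotron-4-340b-instruct"

def build_synthetic_data_request(domain: str, n_examples: int = 5) -> dict:
    """Build a chat-completions payload asking the model to emit
    synthetic question-answer training pairs for a given industry."""
    prompt = (
        f"Generate {n_examples} question-answer pairs a customer might ask "
        f"in the {domain} industry. Return them as a JSON list of objects "
        'with "question" and "answer" keys.'
    )
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # some randomness improves dataset diversity
    }

# Build (but don't send) a request for three healthcare examples;
# POST this payload to API_URL with your API key to get real data back.
payload = build_synthetic_data_request("healthcare", n_examples=3)
print(json.dumps(payload, indent=2))
```

The generated pairs would then be filtered (Nemotron-4 ships with a companion reward model for exactly this) before being used as fine-tuning data.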
Why it matters Nvidia is the world’s hottest company right now (just passed Apple in valuation) since all the different AI developers rely on them for their chips, akin to selling picks and shovels during the gold rush. The move into creating LLMs that support the training of other LLMs aligns perfectly with their strategy.
3. Microsoft retires their GPT builder
Microsoft is removing the ability to create GPTs in Copilot for consumers starting July 10th, and will delete all user-created GPTs. Their focus on GPTs will shift to Commercial/Enterprise plans.
The company cites a shift in strategy for consumer Copilot and a prioritisation of their core product. If you’ve built GPTs with Copilot, you still have a few weeks to save your data and recreate them as GPTs in ChatGPT. OpenAI’s GPT store and the ability to create GPTs in ChatGPT are not affected.
Why it matters Copilot GPTs seem to have gained only a small fraction of the usage of OpenAI’s GPT store – I think it makes sense not to keep two near-identical concepts on different platforms. Microsoft is still continuing GPTs for Enterprise use though, which is understandable, as it’s an easy way for businesses (like Moderna) to securely automate parts of their workflow without the need for developers or a big time investment.
4. OpenAI sheds light on how they handle user privacy
OpenAI just published a post dedicated to explaining how they handle user privacy and how you can manage your data. The company emphasises their wish to have their models learn about the world, not about private individuals. If you don’t want OpenAI to train on your data, here’s what you do:
- Permanently: In ChatGPT you can opt out of model training. You do this by going to Settings > Data Controls > disable “Improve the model for everyone.”
- Temporarily: If you select “temporary chat” in the model selection dropdown, the current chat won’t appear in your history or be used to train OpenAI’s models.
Why it matters OpenAI is addressing concerns around user privacy ahead of the integration of ChatGPT in Siri later this year. Elon Musk was quick to his keyboard after the new partnership was announced, claiming Apple has “no idea” what happens to your data once it’s sent to ChatGPT.
GPTs
Top newcomers on whatplugin.ai’s top 1,000 list of GPTs in the last week. Read about how rankings work.