Orion glasses, Quest 3S headset, Meta* AI update and more

Meta* Connect 2024 is the company’s annual developer event, where CEO Mark Zuckerberg presents its latest hardware and software developments in two key areas: artificial intelligence (AI) and the metaverse.

This year, the company demonstrated new Quest headsets, an update to the Llama AI model, a prototype of glasses with full augmented reality (AR) support, and more.

Orion, “the most advanced glasses the world has ever seen”

Zuckerberg introduced Orion, a prototype that Meta* says will eventually become the first consumer holographic AR glasses. Despite their potential, they won’t be available to the general public just yet. He highlighted the lightweight design, support for hand and eye tracking, and the use of a neural interface. Among the first to try Orion was Nvidia CEO Jensen Huang, a sign of strong industry interest in the technology. Orion was presented as the company’s key development in the field of augmented reality.

The Quest 3S headset will go on sale for $299

Meta* unveiled the Quest 3S at Connect 2024, confirming rumors of a more affordable version of the Quest 3 headset. The Quest 3S costs $299 for the 128GB version and $399 for the 256GB version. The wireless headset is fully compatible with existing Quest apps and games and offers mixed reality capabilities. With the release of the Quest 3S, the price of the Quest 3 will drop to $499. Meta* also announced the discontinuation of the Quest 2 and Quest Pro, which will remain on sale until the end of the year or while stock lasts.

Meta* AI finds its voice

Meta* AI can now communicate with users by voice. Through Messenger, Facebook**, WhatsApp and Instagram**, users will be able to ask questions or chat with Meta* AI out loud and receive a response in the form of a simulated voice message. The digital voice assistant will feature multiple voice options, including the voices of celebrities such as Judi Dench, John Cena, Kristen Bell and Keegan-Michael Key.

The announcement came just a day after OpenAI launched its Advanced Voice Mode feature, which offers a new interface and five new voices. Mark Zuckerberg also noted that Meta* AI now has 500 million users.

The artificial intelligence model Llama 3.2 has been released

Meta*’s multilingual family of Llama models has reached version 3.2 and several models are now multimodal. The Llama 3.2 11B and 90B models are capable of interpreting charts and graphs, captioning images, and identifying objects in images based on simple descriptions.

For example, Llama 3.2 can look at a map of a park and tell you how long a particular trail is or where the terrain gets steeper. Similarly, given a chart of a company’s revenue for the year, the models can quickly highlight the most successful months.

However, it is worth noting that the Llama 3.2 11B and 90B models are not available in Europe due to restrictions imposed by EU law. As a result, some features of Meta* AI, such as image analysis, are disabled for European users.

Ray-Ban Meta* smart glasses have received additional AI functions

Ray-Ban’s Meta* smart glasses continue to receive updates, confirming the potential of smart glasses as a mainstream consumer device. One of the key new functions is the ability to process video in real time using artificial intelligence, allowing users to ask Ray-Ban Meta* questions about what they see in front of them.

In addition, the glasses will gain a reminder function similar to that found on smartphones, as well as live translation from English to French, Italian or Spanish. Full integration with popular music streaming apps like Amazon Music, Audible and iHeartRadio will make the glasses even more convenient and functional.

Meta* AI explores visual search

The company introduced a feature that will be familiar to anyone following how OpenAI, Google and Apple are approaching image search. Meta*’s AI can now respond to image-based prompts and edit photos, letting users refine the results through follow-up feedback.

In addition, the resulting images can be easily shared on Instagram** Stories, making this tool even more attractive to users who want to create content quickly and easily.

More AI in the Facebook** feed

If you use Facebook**, you’ve probably noticed that artificial intelligence has become more prominent recently. A new industry is emerging that uses AI to create engaging content. Earlier this month, Meta* shortened the label that indicates whether content has been edited using its AI tools. Facebook** now offers users AI-generated content that it thinks they will find interesting to share and engage with. This shows that the company is actively integrating AI into its platform to improve the user experience and increase audience engagement.

Testing translation and dubbing of creators’ content

The company has taken a step forward in content translation, announcing tests of AI tools that will create translated dubbing for videos and synchronize it with the creator’s lip movements. The experiment goes beyond simply translating subtitles.

At the moment, testing covers only videos from creators in the US and Latin America, with translations between English and Spanish. The feature is aimed at making content accessible to a wider audience and simplifying interaction between creators and viewers who speak different languages.

Source
* Meta is recognized as an extremist organization in the Russian Federation and banned.
** The activity of Meta Platforms Inc. in selling its products, the social networks Facebook and Instagram, is prohibited in the Russian Federation by court decision on the grounds of extremist activity.
