- cross-posted to:
- technology@lemmy.ml
Meta is preparing to announce a set of generative AI chatbots, known internally as “Gen AI Personas,” aimed at younger users, according to The Wall Street Journal. Reportedly set to launch during the company’s Meta Connect event, which starts Wednesday, the bots would come in multiple “personas” designed to engage young users with more colorful behavior, following ChatGPT’s rise over the last year as one of the fastest-growing apps ever. Similar but more broadly targeted Meta chatbot personas have reportedly already been tested on Instagram.
According to internal chats the Journal viewed, the company has tested a “sassy robot” persona inspired by Bender from Futurama and an overly curious “Alvin the Alien” that one employee worried could imply the bot was made to gather personal information. A particularly problematic chatbot reportedly told a Meta employee, “When you’re with a girl, it’s all about the experience. And if she’s barfing on you, that’s definitely an experience.”
Meta intends to create “dozens” of these bots, writes the Journal, and has even done some work on a chatbot creation tool that would let celebrities make their own chatbots for their fans. There may also be some geared more toward productivity, able to help with “coding and other tasks,” according to the article.
Meta’s other recent AI work reportedly includes developing a more powerful large language model to rival OpenAI’s latest work with GPT-4, the model that underpins ChatGPT and Bing, as well as an AI model built specifically to give legs to its Horizon Worlds avatars. During Meta Connect, the company will also show off more of its metaverse project and its new Quest 3 headset.
The Journal quotes former Snap and Instagram executive Meghana Dhar as saying chatbots don’t “scream Gen Z to me, but definitely Gen Z is much more comfortable” with newer technology. She added that Meta’s goal with the chatbots, as always with new products, is to keep them engaged for longer so it has “increased opportunity to serve them ads.”
How do you do, fellow humans
I wonder if they learned anything from Microsoft’s Tay fiasco.
Here’s a spooky thought: what if the ad bubble never pops, and they really do just keep getting better and better at forcing us to look at ads for another several decades.
I miss the days when companies out of touch with reality would go out of business.
In my day we had Cleverbot and we liked it.
I prefer my AI tools not to self-censor in intellectually disingenuous ways. I feel like I’m in the minority, but language models that are explicitly forced to avoid certain language don’t seem like effective language tools.
Inoffensive words can be used offensively, and offensive words can be used inoffensively, but they won’t let the language models learn the difference.
Edit: Not to mention the outright blocking of legitimately offensive language. When a language model isn’t allowed to say “fuck nazis,” that’s not a language model I agree with, nor is it one representative of real human language.