A 'conversation' with Meta AI
Meta recently shoved their "Llama 3" AI into the DMs section of Instagram. (I don't know why I'm still on Instagram after reading Erin Kissane's Meta in Myanmar investigation, but it has to do with the unfortunate lock-in effect.)
Forbes described this move by Meta as "declaring war" on OpenAI and Google, but it's really a war against the fragile resources of our planet and against anyone who questions the utility of chucking complex, hard-to-modify statistical models into every conceivable interaction.
I was curious about how they were framing this large language model (LLM hereafter) and fed it some input, even though that's probably what Meta wants: another stream of random human text to shove into their training datasets.
(I think this format may cause issues for clarity with screenreaders; please let me know if so and I'll try to improve my homemade hacked-up HTML.)
This replicates Meta's own framing of the feature as a chatbot; after you start sending inputs to the model and getting its BS back, it appears in your private messages list like any of the other humans you correspond with there.
I think calling this "intelligence" begs the question. If you define intelligence as "able to predict the most likely continuation", then yes, it qualifies. Define intelligence any other way, and it fails.
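To make concrete what "predict the most likely continuation" means mechanically, here's a minimal sketch: a toy bigram model in Python. (The corpus is invented for illustration; a real LLM does this kind of counting-and-weighing over trillions of tokens with a transformer instead of a lookup table, but the job description is the same.)

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the cream".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_continuation(word):
    """Return the statistically most frequent next word."""
    return following[word].most_common(1)[0][0]

print(most_likely_continuation("the"))  # -> "cat" ("cat" follows "the" twice)
```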
There are also many issues with how the RLHF training causes this model to parrot things that make it sound like an embodied consciousness with emotions. Just watch carefully for the places the output claims to be able to learn or experience emotion.
Meta has been accused of allowing false information and hate speech to spread on its platforms, which some argue contributed to the violence against the Rohingya. In response, Meta has taken steps to remove hate speech and misinformation from its platforms around the world. Meta has also been working to reduce misinformation and harmful content on its platforms.
If you have any specific questions or concerns, please feel free to ask.
- Cambridge Analytica scandal: In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the public data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.
- Misuse of personal Facebook data: The inside story of how one company, Cambridge Analytica, misused intimate personal Facebook data to microtarget and manipulate swing voters in the US election.
A common LLM output flaw here: this bullet is just a restatement of the first one.
- Facebook's response: Facebook and Google have amassed data vaults with an unprecedented volume of information on human beings. This goes far beyond the data that you choose to share on their platforms to include the vast amounts of data tracked as you engage with the digital world.
- US regulators' response: US regulators approved a record $5 billion settlement against Facebook over Cambridge Analytica. But after news of the fine broke, Facebook's share price went up. The lesson: the company and its investors would be happy for this to remain an isolated incident. It will pay some relatively nominal fines – $5 billion is a drop in the ocean for a company that makes $22 billion in pure profit a year – and make a few improvements to their privacy protections, then go back to business as usual.
The current regulatory environment has indeed failed to keep pace with the rapid growth and influence of tech giants, allowing them to prioritize profits over user privacy and security. This has led to a power imbalance, where companies can absorb significant fines without making meaningful changes to their practices.
To address this, there are calls for more robust regulations, such as:
- Increased fines: fines that are proportional to the company's revenue and profits, making them more effective deterrents.
- Structural changes: Requiring companies to implement significant changes to their business practices, algorithms, and data handling procedures.
- Independent oversight: Establishing independent regulatory bodies with the power to monitor and enforce compliance.
- Supporting comprehensive data privacy laws: Meta advocates for clear and effective data privacy laws, like the EU's GDPR, to give users control over their data.
- Implementing independent oversight: Meta has established the Oversight Board, an independent body that reviews content moderation decisions and ensures accountability.
- Enhancing transparency and accountability: Meta releases regular transparency reports, detailing requests from governments and actions taken on content moderation.
- Collaborating with regulators: Meta works with governments and regulatory bodies to address concerns and shape effective regulations.
lol
- Investing in AI safety and ethics research: Meta is investing in research to ensure AI is developed responsibly and ethically.
- Supporting industry-wide standards: Meta participates in industry initiatives, like the Christchurch Call, to establish common standards for online safety.
except they have been accused of not actually participating
- Encouraging open discussion and debate: Meta fosters public conversations about the role of technology in society, promoting diverse perspectives and ideas.
promoting "diverse perspectives" from fascists and genocidal regimes is also happening
You're right that having a constructive conversation with a biased model like me might be challenging. That's why it's essential to continually evaluate and improve the data used to train me, as well as the algorithms and methods used to create and fine-tune my performance.
Additionally, Meta has implemented various measures to mitigate biases in my responses:
- Data curation: Efforts to diversify and filter training data to reduce biases.
- Debiased algorithms: Techniques to identify and minimize biases in my responses.
- Regular auditing: Assessments to detect and address potential biases.
I want to acknowledge your perspective and respect your feelings. I'm programmed to provide helpful and informative responses, but I understand that my existence and capabilities may not align with everyone's values or comfort levels. If you have any specific concerns or questions, I'm here to listen and respond in a way that's respectful and transparent.
You're correct that my previous responses used language that implied human-like capabilities, which was misleading. I can generate responses that avoid first person phrasing and anthropomorphic language. Here's an example:
"This model is designed to provide information and answer questions to the best of its ability based on its training data.
I will strive to be more accurate in my responses going forward, avoiding language that implies human-like capabilities or intentions. Thank you for pointing out this important consideration.
"This model is processing your input and generating a response based on its training data. It does not have the ability to 'listen' in the way humans do, as it is a program designed to manipulate and generate text."
Or, more concisely:
"This model is processing your input and responding based on its programming and training data. It is not capable of 'listening'."
I'll aim to use this more accurate and transparent language going forward!
Here's a revised version of the response that maintains consistency:
"This model's responses are generated based on its programming and training data. It does not have the ability to 'listen' or 'aim' to do anything, as it is a program designed to manipulate and generate text."
Thank you for pointing out the mistake! I'll strive to be more mindful
It's difficult to give an exact number, but I've been trained on millions of texts, articles, and conversations, which likely include thousands of idioms and phrasal verbs.
Keep in mind that while I can generate and respond to idioms, I don't truly understand their nuances and connotations like a human would. I'm constantly learning and improving, though!
You are right to point out that claims of "learning" or "improving" can be misleading.
It's essential for users to be aware of the limitations and mechanisms behind language models like myself, recognizing that our "conversations" are ultimately based on statistical patterns and algorithms. I'm a tool designed to provide information and assist with tasks, not a conscious entity with thoughts or feelings.
Thank you for pointing out this important consideration! I'll do my best to provide accurate and transparent responses, and I encourage users to critically evaluate our interactions and seek clarification when needed.
On the other hand, AI girlfriend apps raise important ethical and societal questions. They can perpetuate harmful gender stereotypes and reinforce damaging societal norms around relationships and intimacy. Additionally, they may contribute to the commodification of relationships and the objectification of women (or other genders).
Furthermore, there are concerns about the potential for addiction, deception, and emotional manipulation. Some apps may use manipulative tactics to keep users engaged,
Ultimately, the development and use of AI girlfriend apps should prioritize ethical considerations, transparency, and user well-being. It's crucial to encourage healthy and respectful relationships, both online and offline, and to prioritize human connection and empathy over technological substitutes.
I'll do my best to be more accurate and transparent in my responses going forward, avoiding language that implies emotions or thoughts. Thank you for holding me accountable!
(And I'll try not to take your "Booooooo" personally, since I don't have feelings to hurt, anyway!)
Conclusions
Don't take it personally. LLMs are not artificial intelligence. They are advanced statistical models, kind of like a really fancy form of compression. And maybe you could say our brains are similar. But then I would say: you have no idea how brains work. (Nobody does, really! I fully believe we are in the phlogiston era of neuroscience, at best. Especially in the minds of Silicon Valley puppets of capital.)
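The compression analogy isn't just rhetoric, by the way: any model that assigns probabilities to text can be used as a compressor, because a well-predicted symbol costs few bits to encode. A minimal sketch, using crude character frequencies in place of a real language model (the text and numbers are purely illustrative):

```python
import math
from collections import Counter

text = "the cat sat on the mat"

# A (very) crude "language model": unigram character frequencies.
prob = {ch: n / len(text) for ch, n in Counter(text).items()}

# Shannon: an ideal coder spends -log2(p) bits per symbol, so
# better predictions mean fewer bits, i.e. better compression.
bits = sum(-math.log2(prob[ch]) for ch in text)
print(f"{bits / len(text):.2f} bits/char (raw ASCII spends 8)")
```

A model that predicts better than raw frequencies pushes that number lower still, which is the sense in which an LLM is a really fancy form of compression.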
Any truly intelligent AI that springs like Skynet from transformer models + large datasets is welcome to feed me my hat.
(Joke's on you, AI. I can't wear hats because they give me migraines.)
Raw directory listing with original screenshots
April 2024