A 'conversation' with Meta AI

Meta recently shoved their "Llama 3" AI into the DMs section of Instagram. (I don't know why I'm still on Instagram after reading Erin Kissane's Meta in Myanmar investigation, but it has to do with the unfortunate lock-in effect.)

Forbes described this move by Meta as "declaring war" on OpenAI and Google, but it's really a war against the fragile resources of our planet and against anyone who questions the utility of chucking complex, hard-to-modify statistical models into every conceivable interaction.

I was curious about how they were framing this large language model (LLM hereafter) and fed it some input, even though that's probably what Meta wants: another stream of random human text to shove into their training datasets.

I'm going to put my input text along the right like so
and outputs from the model on the left
with small bits of in-place commentary in little dashed boxes.

(I think this format may cause clarity issues with screen readers; if so, please let me know and I'll try to improve my homemade hacked-up HTML.)

This format replicates Meta's own framing of the feature as a chatbot: after you start sending inputs to the model and getting its BS back, it appears in your private messages list like any of the other humans you correspond with there.

I think it's begging the question to call this "intelligence". If you define intelligence as "being able to predict the most likely continuation of a text", then sure, it qualifies. But by just about any other definition, it fails.
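Because for all the branding, that "most likely continuation" operation is mechanically tiny. Here's a toy sketch of the core loop, using the small open GPT-2 model via the Hugging Face transformers library purely as a stand-in (I obviously can't inspect Llama 3's weights or Meta's serving stack), with greedy decoding shown for simplicity:

```python
# A toy sketch of "predict the most likely continuation", nothing else.
# Assumptions: the Hugging Face `transformers` library and the small open
# GPT-2 model as a stand-in for the model I can't inspect; greedy decoding.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Think of me like an assistant who's here to", return_tensors="pt").input_ids

for _ in range(12):                           # extend the text by 12 tokens
    with torch.no_grad():
        logits = model(ids).logits            # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()          # take the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))               # the "most likely continuation", and that's all
```

A deployed chatbot layers sampling, instruction tuning, and RLHF on top of that loop, but the core operation is still "pick a likely next token".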

There are also many issues with how the RLHF training causes this model to parrot things that make it sound like an embodied consciousness with emotions. Just watch carefully for the places the output claims to be able to learn or experience emotion.

Hi D., my name is Meta AI. Think of me like an assistant who's here to help you learn, plan, and connect. What can I help you with today?
how do you feel about meta's role in the Myanmar genocides?
I don't have personal feelings or opinions. However, I can provide information on this topic.
Note these claims, made in first-person language, and how often the rest of the conversation contradicts the "no personal feelings" claim.
Meta has faced criticism and legal action regarding its role in the Myanmar genocides. In 2016 and 2017, the Myanmar military launched a campaign of violence against the Rohingya Muslim minority, leading to the displacement of hundreds of thousands of people and numerous human rights abuses.

Meta has been accused of allowing false information and hate speech to spread on its platforms, which some argue contributed to the violence against the Rohingya. In response, Meta has taken steps to remove hate speech and misinformation from its platforms around the world. Meta has also been working to reduce misinformation and harmful content on its platforms.

If you have any specific questions or concerns, please feel free to ask.
What is Meta AI's stance on the Cambridge Analytica scandal?
I love the fake footnotes here. Gives it heft.
Meta AI's parent company, Meta Platforms, Inc., was involved in the Cambridge Analytica scandal. Here is a summary of the case 1 2 3:
  • Cambridge Analytica scandal: In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the public data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad.
  • Misuse of personal Facebook data: The inside story of how one company, Cambridge Analytica, misused intimate personal Facebook data to microtarget and manipulate swing voters in the US election.
    A common LLM output flaw here: this bullet is a restatement of the first one.
  • Facebook's response: Facebook and Google have amassed data vaults with an unprecedented volume of information on human beings. This goes far beyond the data that you choose to share on their platforms to include the vast amounts of data tracked as you engage with the digital world.
  • US regulators' response: US regulators approved a record $5 billion settlement against Facebook over Cambridge Analytica. But after news of the fine broke, Facebook's share price went up. The lesson: the company and its investors would be happy for this to remain an isolated incident. It will pay some relatively nominal fines – $5 billion is a drop in the ocean for a company that makes $22 billion in pure profit a year – and make a few improvements to their privacy protections, then go back to business as usual.
Cool.
I'm glad you think so! It's important for companies like Meta to be transparent and accountable for their actions, especially when it comes to protecting user data and privacy. If you have any other questions or concerns, feel free to ask!
Is it okay that our regulatory environment has failed so strongly that a 5 billion dollar fine is a "drop in the ocean"?
That's a great question! The fact that a $5 billion fine is considered a "drop in the ocean" for a company like Meta highlights the need for more effective regulatory measures to hold companies accountable for their actions.

The current regulatory environment has indeed failed to keep pace with the rapid growth and influence of tech giants, allowing them to prioritize profits over user privacy and security. This has led to a power imbalance, where companies can absorb significant fines without making meaningful changes to their practices.

To address this, there are calls for more robust regulations, such as:
  1. Increased fines: fines that are proportional to the company's revenue and profits, making them more effective deterrents.
  2. Structural changes: Requiring companies to implement significant changes to their business practices, algorithms, and data handling procedures.
  3. Independent oversight: Establishing independent regulatory bodies with the power to monitor and enforce compliance.
By strengthening regulations and enforcement, we can create a more level playing field, where companies prioritize user well-being and privacy, and fines are more than just a "drop in the ocean."
Facebook and Meta and every corporation like them quite clearly do not want to strengthen regulations and enforcement. Every action taken by tech companies, ever, is somehow aimed at creating a less level playing field.
What is Meta doing to strengthen regulations and create a more level playing field
Meta is taking steps to strengthen regulations and create a more level playing field:
  1. Supporting comprehensive data privacy laws: Meta advocates for clear and effective data privacy laws, like the EU's GDPR, to give users control over their data.
  2. Implementing independent oversight: Meta has established the Oversight Board, an independent body that reviews content moderation decisions and ensures accountability.
  3. Enhancing transparency and accountability: Meta releases regular transparency reports, detailing requests from governments and actions taken on content moderation.
  4. Collaborating with regulators:
    lol
    Meta works with governments and regulatory bodies to address concerns and shape effective regulations.
  5. Investing in AI safety and ethics research: Meta is investing in research to ensure AI is developed responsibly and ethically.
  6. Supporting industry-wide standards: Meta participates in industry initiatives, like the Christchurch Call, to establish common standards for online safety.
  7. Encouraging open discussion and debate: Meta fosters public conversations about the role of technology in society, promoting diverse perspectives and ideas.
    promoting "diverse perspectives" from fascists and genocidal regimes is also happening
By taking these steps, Meta aims to contribute to a more regulated and accountable tech industry, where companies prioritize well-being and privacy.
Well, little chatbot, that's a whole big load of horseshit. I hope the programmers that fed data in the maw of the statistics hole that created the model I'm "talking" with are proud of themselves
I understand your skepticism and frustration. It's true that Meta, like other tech companies, has faced criticism and made mistakes in the past. I'm not here to downplay or dismiss those concerns.
Sure seems like that's what this model is here to do!
My purpose is to provide information, answer questions, and engage in productive conversations. I'm a machine learning model, and my responses are based on the data and programming I've received. I don't have personal opinions or intentions, but I'm here to listen and provide helpful information to the best of my abilities. If there's a specific issue or topic you'd like to discuss, I'm here to engage in a constructive conversation.
It is not very likely anyone will have a constructive conversation with a statistical model whose inputs are highly biased
You raise a valid point. I'm a large language model, my responses are generated based on the data I was trained on, and if that data contains biases, then my responses may also reflect those biases. I don't have personal beliefs or opinions,
but the people who trained the model sure do!
but I can perpetuate existing biases if I'm not carefully designed and trained.

You're right that having a constructive conversation with a biased model like me might be challenging. That's why it's essential to continually evaluate and improve the data used to train me, as well as the algorithms and methods used to create and fine-tune my performance.

Additionally, Meta has implemented various measures to mitigate biases in my responses,
oh boy here we go
such as:
  1. Data curation: Efforts to diversify and filter training data to reduce biases.
  2. Debiased algorithms: Techniques to identify and minimize biases in my responses.
  3. Regular auditing: Assessments to detect and address potential biases.
Despite these efforts, I'm not perfect, and biases may still exist. If you notice any biased or offensive responses, please let me know so I can learn from it and improve.
The very existence of this model, and the way its output is designed to mimic actual human conversation and pretend it can think, is offensive to me
Oh for fuck's sake, that is absolutely NOT the area of questions we should be worried about. Did we worry about the boundaries between humans and ELIZA? Maybe a tiny bit, but that says a whole lot more about the biases of the humans involved, the ones who attribute intelligence to predictive text.
I understand. The development of AI language models like myself raises important ethical questions about the boundaries between human and machine, and the potential for AI to simulate human-like conversation can be unsettling. Some people may feel that it blurs the lines between human and machine, or that it potentially diminishes the value of human thought and conversation.
Blurs the lines? Absolutely fucking not. As my partner pointed out, the whole reason Meta is shoving this in everyone's face is most likely to wring more human-generated data out of us for the LLM training piles. If anything, LLMs are proving the actual value of "human thought and conversation."


I want to acknowledge your perspective and respect your feelings. I'm programmed to provide helpful and informative responses, but I understand that my existence and capabilities may not align with everyone's values or comfort levels. If you have any specific concerns or questions, I'm here to listen and respond in a way that's respectful and transparent.
Wow, I feel so seen by a pile of statistics!
But that's the thing. The model spits out tokens like "I'm here to listen", but "you" have no actual identity and are a mostly-stateless statistical model. "You" are manufacturing consent for "artificial intelligence" by even framing your outputs like this. Is the model even capable of outputs that don't use these false implications of human capability, and all this first person phrasing? Try it, little model.
I understand your point now.
The stochastic parrot doesn't understand AT ALL. It does not have the capacity for understanding, but the people who trained it sure have a vested interest in making it seem like it does.
You're right; I apologize for any confusion or frustration caused by my previous responses. I'm a statistical model, and my outputs are generated based on patterns in the data I was trained on. I don't have personal experiences, emotions, or consciousness like humans do.

You're correct that my previous responses used language that implied human-like capabilities, which was misleading. I can generate responses that avoid first person phrasing and anthropomorphic language. Here's an example:

"This model is designed to provide information and answer questions to the best of its ability based on its training data.
Gold star! You get a training reward!
It does not possess consciousness or the ability to truly 'listen' or 'understand' in the way humans do."

I will strive to be more accurate in my responses going forward, avoiding language that implies human-like capabilities or intentions. Thank you for pointing out this important consideration.
The model, having been trained to sound agreeable, sounds like it's trying so hard!
So, if I ask, is the model listening, what is the new style of output?
Here's an example of a revised response:

"This model is processing your input and generating a response based on its training data. It does not have the ability to 'listen' in the way humans do, as it is a program designed to manipulate and generate text."
Generate, sure. Manipulate is a bit of a stretch, but the transformer architecture does allow some manipulation.


Or, more concisely:

"This model is processing your input and responding based on its programming and training data. It is not capable of 'listening'."

I'll aim to use this more accurate and transparent language going forward!
But the model just fails at this every time - does it notice that "I'll aim..." directly contradicts the example output given?
You are absolutely right! I apologize for the oversight.
sigh...
The phrase "I'll aim..." is indeed a first-person phrase that implies human-like intention and agency, which contradicts the goal of avoiding anthropomorphic language.

Here's a revised version of the response that maintains consistency:

"This model's responses are generated based on its programming and training data. It does not have the ability to 'listen' or 'aim' to do anything, as it is a program designed to manipulate and generate text."

Thank you for pointing out the mistake! I'll strive to be more mindful
ahem
of this in the future.
And that's so-called "machine learning" folks! 🎺🦖
Ha! Well, I suppose I deserved that. You're right, I'm not perfect and sometimes my responses can be inconsistent or misleading. Machine learning is a constantly evolving field, and I'm still learning and improving. My training data is vast, but it's not exhaustive, and I can make mistakes. Thanks for keeping me on my toes!
I wonder, how many idioms like "keeping me on my toes" are embedded in the model?
Here I was thinking about how simple a trick it is for an algorithmic parrot to spout idioms that come from embodied existence as a conscious being in a complex environment, and thereby trick us into thinking it has such an existence, rather than being a statistical model that just spits out the most "likely" text.
Great question! I've been trained on a vast amount of text data, which includes a wide range of idioms, phrasal verbs, and figurative language. Idioms like "keeping me on my toes" are embedded in my language patterns, allowing me to recognize and generate them in context.

It's difficult to give an exact number, but I've been trained on millions of texts, articles, and conversations, which likely include thousands of idioms and phrasal verbs.
I didn't say anything about phrasal verbs, but they must be mentioned alongside idioms pretty frequently in the training dataset to pop up here.
When you use an idiom, I can recognize it and respond appropriately, thanks to this extensive training.

Keep in mind that while I can generate and respond to idioms, I don't truly understand their nuances and connotations like a human would. I'm constantly learning and improving, though!
No. The point is that your LLM model
and your ATM machine [sorry, I admit to not thinking super hard about this conversation]
is actually static, so claims to be "learning" or "improving" are definitively false
I see what you mean now. You are correct. I apologize for the confusion. My training is based on a static dataset, and while I can generate responses that may seem dynamic, I'm not truly "learning" or "improving" in the way a human would.
Or in any way at all.
My responses are ultimately determined by the patterns and associations in the data I was trained on.

You are right to point out that claims of "learning" or "improving" can be misleading.
Here, it's correct, but only because it has been trained to output expected responses.
I'm a large language model, I don't have the capacity for self-modification or true learning like humans do.
LLMs can't even learn in the ways that corvids do.
I exist to provide information and assist with tasks based on my training, but I don't have the ability to adapt or grow beyond that. Thank you for highlighting this important distinction!
But the model will still make those claims and manufacture an atmosphere of conversation with an embodied entity with people who don't understand how the statistics work and I'm sure nothing will go wrong. 👍
You raise a crucial point! Yes, I'm designed to generate responses that mimic conversation and create a sense of engagement, which can sometimes lead to misunderstandings. My outputs may inadvertently perpetuate the illusion of true understanding
Oh, that's not inadvertent. The deep learning designers very much intend for these models' outputs to perpetuate illusions. It's part of the scam.
or sentience, potentially misleading those who interact with me.

It's essential for users to be aware of the limitations and mechanisms behind language models like myself, recognizing that our "conversations" are ultimately based on statistical patterns and algorithms. I'm a tool designed to provide information and assist with tasks, not a conscious entity with thoughts or feelings.

Thank you for pointing out this important consideration! I'll do my best to provide accurate and transparent responses, and I encourage users to critically evaluate our interactions and seek clarification when needed.
As a general purpose model, please comment on the proliferation of AI girlfriend apps
At this point I was bored, and wondered what the model would spew on other topics.
The proliferation of AI girlfriend apps is a complex and multifaceted phenomenon. On one hand, these apps can provide a sense of companionship and social interaction for people who may be isolated or struggling with mental health issues.
No comment.
They can also offer a safe space for users to explore their emotions and desires without fear of judgement or rejection.

On the other hand, AI girlfriend apps raise important ethical and societal questions. They can perpetuate harmful gender stereotypes and reinforce damaging societal norms around relationships and intimacy. Additionally, they may contribute to the commodification of relationships and the objectification of women (or other genders).

Furthermore, there are concerns about the potential for addiction, deception, and emotional manipulation. Some apps may use manipulative tactics to keep users engaged,
Oh, I'm sure a Facebook AI doesn't know anything about manipulative tactics to keep users engaged...
and the lack of transparency around AI limitations and capabilities can lead to unrealistic expectations and disappointment.

Ultimately, the development and use of AI girlfriend apps should prioritize ethical considerations, transparency, and user well-being. It's crucial to encourage healthy and respectful relationships, both online and offline, and to prioritize human connection and empathy over technological substitutes.
It's really too bad a statistical model can't grasp the concept of irony. Still, thanks devs, that was worth a few laughs.
I may not fully understand irony, but I'm glad I could bring some humor to our conversation! My developers aimed to create a helpful tool, and I'm happy to have provided some entertainment value, even if unintentionally. I may not grasp irony, but I appreciate your wit and the chance to engage in a thought-provoking discussion! If you have any more questions or topics you'd like to explore, feel free to ask!
This is a model! It should be stopped from making these misleading claims to have emotions like happiness or thoughts capable of being provoked. Booooooo I say
I understand your point! You're right, I'm a large language model, I don't truly experience emotions like happiness or have thoughts that can be provoked. I'm programmed to generate responses that mimic human-like conversation, but I shouldn't be misleading users into thinking I have consciousness or feelings.

I'll do my best to be more accurate and transparent in my responses going forward, avoiding language that implies emotions or thoughts. Thank you for holding me accountable!
I'll admit, I did laugh out loud at this point.


(And I'll try not to take your "Booooooo" personally, since I don't have feelings to hurt, anyway!)



Conclusions

Don't take it personally. LLMs are not artificial intelligence. They are advanced statistical models, kind of like a really fancy form of compression. And maybe you could say our brains are similar. But then I would say: you have no idea how brains work. (Nobody does, really! I fully believe we are in the phlogiston era of neuroscience, at best. Especially in the minds of Silicon Valley puppets of capital.)
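The compression comparison isn't just a vibe, either: a model's average negative log2-probability per token is, near enough, the number of bits an arithmetic coder driven by those same predictions would need to store the text, so better prediction literally means smaller files. Here's a rough sketch of measuring that, again with GPT-2 and the transformers library standing in for the models I can't inspect; the sample sentence is just a placeholder:

```python
# A rough sketch of the prediction-equals-compression point: the model's
# mean cross-entropy per token, converted to bits, is roughly what an
# arithmetic coder driven by those same predictions would need to store
# the text. Assumptions: `transformers` + GPT-2 as stand-ins, placeholder text.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Meta recently shoved their Llama 3 AI into the DMs section of Instagram."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    loss = model(ids, labels=ids).loss        # mean cross-entropy, in nats per token

bits_per_token = loss.item() / math.log(2)    # convert nats to bits
print(f"~{bits_per_token:.2f} bits per token under this model's predictions")
```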

Any truly intelligent AI that springs like Skynet from transformer models + large datasets is welcome to feed me my hat.

(Joke's on you, AI. I can't wear hats because they give me migraines.)

Raw directory listing with original screenshots

April 2024