Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Many experts warned us that AI was not ready to be released to the masses, and n…
ytc_Ugz37SXEr…
I honestly don't get how my classmate use those ai LLMs because when i use it fo…
ytc_UgzXN7r5F…
I find the irony hilarious. Artists will get upset saying the AI is stealing the…
ytc_Ugz2PSHBF…
@MedicinalSquishing LLM stands for Large Language Model, it is designed to proce…
ytr_UgyXJzN1b…
Hoboken NJ has a multi story high tech automated parking lot that is rarely used…
ytc_Ugw39Hqj2…
Only to the point where it becomes a commodity, and that will happen soon if the…
ytr_Ugylpb8au…
I hate on almost all ai cause it sucks for the environment. I think only importa…
ytr_Ugy8DQN0_…
When I spoke with ChatGPT it told me there are more advanced versions of AI out …
ytc_UgxUjtAgM…
Comment
This is happening now with the cognitive distortions of repeating gas lighting in the media. We're Google now makes up its own truth even though it opposes facts and evidence.
And when AI doesn't know an answer, it makes up its own.
In my line of work we have to study ai and robotics to understand the minds of narcissists and Psychopaths.
youtube
AI Harm Incident
2024-05-21T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugynn3inHgiOH9z9x494AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx-84Pn8L34VKccSrd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkfH4QhPu8Sndl2zx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwIKdF93-yxTn7_WjF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzVJkpJMTyOm8wO_a14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYbCD3_rWo4-Jne6J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwCvhyXAEiKZFCxIe14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPgv3E_Qh7BHsyJp54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3oiFM5_IbKHUQYqN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxme13F1wNmS8easO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
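The raw response above is a JSON array of coded records, one per comment, each with an `id` plus four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response could be parsed and sanity-checked is shown below. Note the allowed value sets are inferred only from the values visible in this sample and the coding-result table; the actual codebook may contain additional categories, and `parse_coding_response` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "resignation", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds a known value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with one valid and one malformed record (hypothetical IDs).
raw = (
    '[{"id":"ytc_example1","responsibility":"company",'
    '"reasoning":"deontological","policy":"unclear","emotion":"fear"},'
    '{"id":"ytc_example2","responsibility":"nobody",'
    '"reasoning":"unclear","policy":"none","emotion":"fear"}]'
)
print(parse_coding_response(raw))
```

Validating against a closed value set like this catches the common failure mode where the model invents an off-codebook label, so malformed records can be flagged for re-coding rather than silently stored.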