Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI is so bad for the environment btw, it uses mass amounts water, increased use … (ytc_UgwYJbU6P…)
- To be honest I’m going to be super anti about it and refuse to use anything AI g… (ytc_UgyZMlW6L…)
- People who write to artists, "you need to use Ai to keep up with the times" prov… (ytr_UgzsAVYiw…)
- Shut the duck up. Not taking to you but she is not a robot this thing is bullshh… (ytr_UgxQUbEHX…)
- If everything will be done by ai then how humans going earn and spend. Ai need t… (ytc_UgxKoUdxU…)
- I was asked to make a ai image as part of class... ...I didn't listen, cuase ai… (ytc_Ugwk52-za…)
- Well it works for scientific papers and other fields too. A single source is pla… (ytc_UgzA--ZfJ…)
- I suggest you interview Yuval Noah Harari. He is an Israeli historian, philosoph… (ytc_Ugyfhkjw8…)
Comment
It becomes clear if you use these LLM's long enough, that there is no brain in there. No thought process.
That's what scares me. SO many people putting faith in and legitimately "informing" themselves and their opinions based on what these LLMs are generating. What's worse with these is that we're incorporating them into more and more tech, and growing them at alarming paces, when we don't really understand enough about what they'll do and how they'll operate as they grow. People are afraid we'll be eliminated by some rogue sentient AI with intangible intent, but I fear a far less dramatic scenario: that we ultimately lose control of, or rely far too heavily on, what is effectively nothingness. Words strung together in ridiculous series of mazes linked together, in control of the flow of information, technology, resources, etc. I don't think people realize how dangerous this could be, especially with the advancement of AI-generated video and audio. We are also integrating this technology into our weapons and our resource management. We are setting ourselves up for failure at the least. No, I don't think we're anywhere near AGI. I don't think we have to be for the dangers of this tech to become a reality.
youtube
2025-10-15T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIFGk6tCItgBp7V4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugww43cHU9ErtCnvRZB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwxwFr__8Gur_VzsnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz0j1AgtucfAjX79gl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWSO0QwXrdr1u8iVx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFlOe4NQwrqBxfX4F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
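The "look up by comment ID" view above can be reproduced from a raw response like this one: parse the JSON array and key each record by its `id` field, then pull out the coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch, assuming the response is a well-formed JSON array with those exact field names; the `index_by_id` helper name is hypothetical, not part of any real tool:

```python
import json

# Two records copied from the raw batch response shown above.
raw_response = """[
  {"id": "ytc_Ugz241Cy9m3-fqmcn354AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_Ugz241Cy9m3-fqmcn354AaABAg"]["policy"])  # liability
print(codes["ytc_Ugz241Cy9m3-fqmcn354AaABAg"]["emotion"])  # fear
```

The first lookup matches the Coding Result table above (the inspected comment's ID starts with `ytc_Ugz241`). In practice you would also want to handle malformed model output, e.g. wrap `json.loads` in a `try`/`except` and log records missing an `id`.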