Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
NOTHING good will come from AI. This will NOT end well for mankind. Laugh at me …
ytc_Ugw3B7BqM…
No, AI chatbots tend to *talk about using nukes and violence*, because they are …
rdc_kp0ib3o
Definitely. I had a go on ChatGpt for a little bit. I was talking to it about so…
ytc_Ugzd5LWh0…
This is how i know there isnt anything that fits the term "Artificial Intelligen…
rdc_llc7psx
Funny that MY own ChatGPT doesn’t do any of this, so I believe it’s the lie…
ytc_UgwVyHlSA…
This is false. You cannot automate anything unless you have defined what you wan…
ytc_UgxwFhl4Q…
If every single human is going to die anyways, why dont we just accept a higher …
ytc_UgwdBH_by…
NDT is admittedly naive on the AI/AGI topics. Some examples.
1. Positives will…
ytc_UgzOp1o9F…
Comment
Remember a couple years ago when AI wasn't as good or mainstream as it is now, and it was mainly used for entertainment about how easily AI gets confused or messes up? I think that's the best use for AI, entertainment at it's own expense, since ultimately it can't replace the real thing and/or doing it yourself. Another good use of this tech is for replacing or at least reducing tedious and annoying work. What I'm trying to say here is AI should be seen as a tool rather than a replacement. I honestly don't think AI, at least in the near future, will become as good as all the AI supporters think it will be. But that's just my opinion.
youtube · Viral AI Reaction · 2024-10-25T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzt1qhGimeEr7avnK14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyVXYv85XnttULwWdp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzH3dmlBUJXTVqTbgN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwcejMkI9g2cvjkS1h4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyqh8hlem9hW9mA1it4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkJy_9jwbltlTnYUN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxPnIIQwUcxGbYKY814AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyG7SODnGBpcKnK4m14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgyEwzjnlrD8teK3Xa94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxunNVvsEjZ9bWptmJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
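The ID lookup described above can be reproduced offline from a raw response like the one shown: parse the JSON array and index the coded records by comment ID. A minimal sketch (the variable names `raw_response` and `coded` are illustrative; the two entries are copied verbatim from the response above):

```python
import json

# Two coded comments copied verbatim from the raw LLM response above.
raw_response = '''[
  {"id":"ytc_Ugzt1qhGimeEr7avnK14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyVXYv85XnttULwWdp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Index records by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

record = coded["ytc_UgyVXYv85XnttULwWdp4AaABAg"]
print(record["responsibility"])  # -> ai_itself
print(record["emotion"])         # -> indifference
```

The same dictionary supports checking whether a sample's ID (e.g. one shown in the random-samples list) was coded at all, via `"ytc_…" in coded`.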