Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Pushing the AI to the wall is kind of scary if you think about it. 😂😂😂😂😂😂…
ytc_Ugw69ISJ_…
Not entirely, my company hasn't hired anyone that's Gen-Z in years I assume, eve…
rdc_oi15ac6
So, we are all guilty until proven innocent
This is another reason I would nev…
rdc_oaz5h3t
I'm not convinced on consciousness or sentience, but certainly it's very interes…
ytc_Ugx72Eoma…
Any company confidential data shared with AI systems hosted outside your company…
rdc_l59dnj7
I worry that Palki Sharma will also be replaced by ai. I hope not or else i will…
ytc_UgyDcqHpP…
1 robot: oops I think I messed up-
2 Robot: BRO *breaks glass*
The Worker: CHI…
ytc_UgzWEoEGm…
F****** AI!! Imagine how much better they will make this in the years to come.…
ytc_UgzGuFaZq…
Comment
But... If AI is only 99% correct, it won't work in the long run. You can't build a spaceship or a bridge with 99% safety. It would mean that huge investments would go down the drain. My hope is that it is a bubble which will burst so we can laugh at the rich. How can the AI verify if it is hallucinating? Even if multiple AI agents go through the same material it is impossible to distinguish which is correct and wrong. Imagine giving a project that is a month long for a human, what if just one little hallucination gets by and completely wrecks the whole operation? It must fail. Or we must feed it nonsense using AI bots.
youtube
Viral AI Reaction
2025-11-22T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
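A coding result like the table above can be sketched as a small record type. This is a minimal sketch, not the tool's actual schema: the field names mirror the dimension labels shown here, and the example category values are only those visible in this page.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One comment's codes across the four dimensions shown above.

    Values are plain strings here; the real codebook presumably
    restricts each dimension to a fixed set of categories.
    """
    comment_id: str       # e.g. a "ytc_..." or "rdc_..." ID
    responsibility: str   # e.g. "ai_itself", "company", "developer"
    reasoning: str        # e.g. "consequentialist", "deontological"
    policy: str           # e.g. "none", "ban", "regulate"
    emotion: str          # e.g. "fear", "outrage", "approval"
    coded_at: str         # ISO-8601 timestamp

# Hypothetical row matching the table above (ID shortened for illustration).
row = CodedComment(
    comment_id="ytc_example",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at="2026-04-26T23:09:12",
)
print(row.emotion)  # fear
```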
Raw LLM Response
[
{"id":"ytc_UgzcyQjBwIW16-RVHQR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgySM_ti1Fv8XJaW_rV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGoDK4B2TUFUUdFLZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyH6o2FbgiHhjrCQql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzBYf94cHSwzowtO0t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5-giop12FFXCzxzJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw4viIfOnbLvj5CNBN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzdls6f8mpQpEfHhTF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwhoizvwGKQSC2iE6N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw26Yj2I2XRMiFuRx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
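A response in the shape above can be parsed and sanity-checked before the codes are stored. This is a hedged sketch, not the tool's actual validation logic: the allowed category sets are inferred only from the values visible on this page, and the full codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the examples on this page
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "resignation", "approval", "fear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Usage with a one-record batch (hypothetical ID):
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded["ytc_x"]["emotion"])  # fear
```

Indexing by comment ID also gives the "look up by comment ID" behavior directly: `coded["ytc_x"]` returns that comment's codes.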