Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
0:29 0:30 You cannot drive the tesla because it is using Full Self Driving (Supe…
ytc_UgxrDAY6c…
As someone with a deletion syndrome, art has always been accessible to me. I nev…
ytc_UgzJLEpDw…
Ok, but how long is it going to take to train a computer to be as empathetic, co…
rdc_oh2vogj
What I hate about AI the most is... how it pushed people into cheating, faking, …
ytc_UgwcSiu0c…
AI can be qood, but... you have to train it, and that takes lots of effort.…
ytc_Ugzs9evmw…
Legit question, how could AI eliminate all humans if it needs a power source and…
ytc_UgzbsMzPF…
@williamrowe8387 A long stream of lies and the position that basically we can tr…
ytr_Ugwj0CyD_…
You would expect an entity that can reason to not make very drastic rational mis…
ytc_UgwtJeSrM…
Comment
Any hope that investing money in making these algorithms safer is going to actually make them safer is foolish. We have absolutely no idea what we're messing with here.
It took 50+ million dead people in WW2 and 2 nuclear bombs on civilian targets to convince the world of mutually assured destruction in the event of nuclear war.
youtube
AI Governance
2023-05-13T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyF1AlHAfszY1-3U6h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKh-fRBvCVY8c3KPt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwOmCPYc1PpXY8RlMF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzqMYt96jb_KJe3SCB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxdS-bIMpKW2iqXdD14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCbBJAY_mQsCkhw6l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCkyduFYf16qMXywt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTGG9xmSwlDddcUpp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz1Jx-OscbQUx_NePV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxkmCR2cApan2DTNwp4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
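The coding result shown above is one record pulled out of this raw JSON array by its comment ID. A minimal sketch of that lookup-and-validate step is below; the function name `parse_codes` and the `DIMENSIONS` vocabulary are assumptions, with the allowed values inferred only from those actually appearing in the output above.

```python
import json

# Allowed values per dimension. This vocabulary is an assumption
# reconstructed from the values visible in the raw response above;
# the real coding scheme may include more labels.
DIMENSIONS = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into a
    lookup keyed by comment ID, dropping records that are missing a
    dimension or use an out-of-vocabulary value."""
    codes = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue  # no comment ID, nothing to index by
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            codes[cid] = {dim: rec[dim] for dim in DIMENSIONS}
    return codes

# Hypothetical one-record example (the ID is made up for illustration).
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_codes(raw)["ytc_example"]["emotion"])  # fear
```

Dropping invalid records rather than raising keeps a single malformed line in the LLM output from discarding the whole batch; those comments simply remain uncoded.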