Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Chapter: Why would a.i. go rogue?
- It's not that a.i. would go rogue; it's that a.i. is rogue by default, and we don't know how to align it with our interests.
- Let's say we want to train an a.i. to get coffee. The order to "get coffee" seems simple, but actually we want it to get coffee AND not break things, AND not hurt anyone, AND not spend all our money, AND ...
- We want it to "get coffee" while knowing and following all of our values.
- Right now we don't know how to do that, so any order becomes dangerous.
youtube · AI Governance · 2025-10-25T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
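The "Coding Result" table is a per-comment view of one record from the model's JSON output, with the coding timestamp appended. A minimal sketch of that rendering, assuming a `coding` dict with the four dimension keys shown in the raw response (the function name `coding_table` is hypothetical, not part of the tool):

```python
def coding_table(coding: dict, coded_at: str) -> str:
    """Render one coding record as the markdown dimension/value table."""
    rows = [
        ("Responsibility", coding["responsibility"]),
        ("Reasoning", coding["reasoning"]),
        ("Policy", coding["policy"]),
        ("Emotion", coding["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)

# Example: the record shown above.
print(coding_table(
    {"responsibility": "ai_itself", "reasoning": "consequentialist",
     "policy": "unclear", "emotion": "fear"},
    "2026-04-26T23:09:12.988011",
))
```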
Raw LLM Response
[
{"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]