Raw LLM Responses
Inspect the exact model output for any coded comment.
[Interactive controls omitted: look up a comment by ID, or click one of the random sample previews to inspect its coding. One inspected example follows.]
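For readers who want to script the same lookup, here is a minimal sketch. It assumes the coded records sit on disk as JSON arrays in a directory of batch files; the `coded_batches/` path and per-file layout are hypothetical, inferred from the raw response format shown below.

```python
import json
from pathlib import Path

def lookup_coding(comment_id: str, batch_dir: str = "coded_batches") -> dict | None:
    """Return the coded record for one comment ID, or None if not found.

    Assumes each batch file holds a JSON array of records shaped like
    the raw LLM response below (hypothetical storage layout).
    """
    for batch_file in Path(batch_dir).glob("*.json"):
        for record in json.loads(batch_file.read_text()):
            if record["id"] == comment_id:
                return record
    return None
```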
Comment
AI2027 doesn't make any sense from a computer programming perspective.
Maybe it just assumes AGI, which we haven't even proven to be possible yet and certainly haven't achieved.
Otherwise, a computer program can only use the functions with which it is provided. All of the scare stories you see about "AI tried to blackmail researchers" and stuff like that only happens because the researchers provided the LLM with parameters like "You need to avoid being shut down. You can use blackmail to achieve this. Use this hack_emails function we wrote to interface with our organization's emails."
The most dangerous things that can happen with computers are either that 1) They give us the wrong instructions and we act on them in some catastrophic way, or 2) We program them incorrectly, put them in charge of something dangerous, and they follow our exact instructions without asking us if we really meant to do that because they are instruction following engines.
youtube · AI Governance · 2025-08-12T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
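The four dimensions take values from a closed codebook. As a sketch of a validity check, the sets below contain only the values that actually appear in this sample's raw response; the full codebook may define more.

```python
# Values observed in the sample batch below; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "mixed", "outrage", "resignation", "fear"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```

Running `validate_record` over every record in a batch is a cheap guard against the model drifting outside the codebook.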
Raw LLM Response
[
{"id":"ytc_UgzHbyHr8BQmKOJI_8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfv3_yck0fEbd-vIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzb1_gtmPpOHb6sXWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlaLVtkoMqseLSwN94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx6T6j_PG_4hmoZ2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxmen0r82zywpa0aT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhNbZtrih6h9sxn1Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOx40P27mm7BJIWAt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxvatfpCv0Y9hZ4x1t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxFudW6sfQhYS5ANwx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
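A sketch of how a response like this might be parsed back into per-comment records, assuming the model was prompted to return a bare JSON array; trimming to the outermost brackets guards against stray prose around it.

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Index a batch of coded records by comment ID.

    Assumes the model's reply contains one JSON array, as in the
    response above; text outside the brackets is discarded.
    """
    start, end = raw.find("["), raw.rfind("]") + 1
    records = json.loads(raw[start:end])
    return {record["id"]: record for record in records}
```

On the response above, `parse_batch_response(raw)["ytc_UgxFudW6sfQhYS5ANwx4AaABAg"]` yields exactly the record rendered in the Coding Result table.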