Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it (a sketch of how such a sample list might be drawn follows the list).
- "> Safety is in quotes because models like Claude 3.7 were not nearly capable …" (`rdc_o9w4ktd`)
- "Who ever comes up with these titles need to be fired. They are off shifting blam…" (`ytc_UgxtqmrN3…`)
- "I know AI is advancing and it's a risk, but can we stop using what AI company CE…" (`ytc_Ugx2C8Ecn…`)
- "how do we know its AI. I know its just a kid but would he have sudoku'd over AI …" (`ytc_UgyZuufIy…`)
- "Nope these robots should be turned to scrap metal immediately seen too many term…" (`ytc_UgwqvcZo0…`)
- "Would it be possible for Alphabet to acquire Uber down the road when self-drivin…" (`rdc_dfthjmp`)
- "We understand your concern. While AI technology is advancing rapidly, it's also …" (`ytr_Ugy_wA61V…`)
- "Police State, Dystopian Police. 0:00 You want to go to jail too? (extortion,…" (`ytc_UgyKkAExt…`)
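
The random-sample view above presumably just draws a handful of coded comments from the store and pairs a truncated preview with the comment ID. A minimal sketch under that assumption (the `coded_comments` structure and field names are illustrative, not the tool's actual API):

```python
import random

def random_samples(coded_comments, k=8, preview_len=80):
    """Pick k coded comments at random and return (preview, comment_id) pairs.

    `coded_comments` is assumed to be a list of dicts with at least
    'id' and 'text' keys; both names are illustrative.
    """
    picks = random.sample(coded_comments, min(k, len(coded_comments)))
    samples = []
    for c in picks:
        text = c["text"].strip()
        preview = text if len(text) <= preview_len else text[:preview_len] + "…"
        samples.append((preview, c["id"]))
    return samples
```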
Comment

> It would seem the smarter AI becomes the more redundancies are built as a failsafe so unplugging it is a juvenile thought. Hitting it with a series of calculated emp strikes or even building it in without AIs knowledge seems a safe bet. It should be mandatory to have the ability to disable the system with ease. It's something you would assume think tanks have already solved in that no alternative is reckless and 100% irresponsible and 100% avoidable. Human error is no excuse when the stakes are this high.

Source: youtube · Topic: AI Governance · Date: 2025-09-04T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
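
Each coded comment carries the same four dimensions shown above, plus the coding timestamp. A minimal sketch of that record as a typed structure; the label sets below are only the values observed on this page, not necessarily the tool's full codebooks:

```python
from dataclasses import dataclass

# Labels observed in the results on this page; the complete codebooks are an assumption.
RESPONSIBILITY = {"developer", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "resignation", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```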
Raw LLM Response
```json
[
{"id":"ytc_Ugz3a47Q2o3jZ4ZKVeh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTNov40IYNhUhULMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxjjplBQ1ilwAy4WBV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1g8rFTxmfdUaDZ_h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzV3MbFohPzOyY-wl54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvZdzDfRoDnXVwYVl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkZHRHVkT117W9u9Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzsyk89KjYwb4gPwjJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzwEWiWxfvKNyUGb9J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyj4gsfVUxJLiWDupB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
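
The raw response is a JSON array with one object per coded comment, so the look-up-by-ID feature described at the top of this page reduces to parsing the array and indexing it by `id`. A minimal sketch assuming nothing beyond the format shown above (`index_raw_response` is an illustrative name, not the tool's API):

```python
import json

def index_raw_response(raw_text: str) -> dict:
    """Parse one raw LLM batch response (a JSON array) and index its records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_text)}

# Usage with the response shown above, abbreviated to the record that matches
# the coding result for the displayed comment (developer / consequentialist / regulate / fear):
raw = ('[{"id":"ytc_UgzvZdzDfRoDnXVwYVl4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
by_id = index_raw_response(raw)
rec = by_id["ytc_UgzvZdzDfRoDnXVwYVl4AaABAg"]
print(rec["responsibility"], rec["policy"])  # -> developer regulate
```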