Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below.
- ytc_UgxcmuLJi…: Humans will be needed for caring professions like eldercare and childcare and nu…
- ytc_UgyVY5qgo…: I totally agree with all of these points, and I do agree that AI art is totally …
- ytc_Ugxs2kHxv…: Just as astronomers get straight militant about "It's not aliens!!!" so engineer…
- ytr_UgwV0YMNf…: the real risk is that 90% of the population just accepts AI is right 80% of the …
- ytc_UgwB-bGV4…: Omg i always use my manners with all ai but not because of this reason only beca…
- ytc_UgxjYz2bM…: "The only connection you have to an AI is the connection to your own stupidity."…
- ytc_Ugz8fHq43…: Thank you for saying all this! I’m glad that people still prefer other people’s …
- ytc_Ugy8vQYUJ…: By asking chatgpt to re-write its answer as if he is talking to a child, chatgpt…
Comment
I'm not particularly intelligent, however if I were to try and come up with a simple method of AI regulation would it not be possible to create every AI with two thinking components? Component A, the supper intelligence, and component B, the failsafe which is programmed to simply deconstruct, scramble or shut down component A if the regulations are broken. Or how about we just unplug the damn thing 🤔. That is if AI becomes a threat to human life. The threat to our way of life is inevitable I believe.
Source: youtube · AI Governance · 2025-10-19T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwHMluy0kJn5blLn594AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx0AYObMigx_P66EPd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzonYyRbbkQKlcSgEN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzKEebplAYMBz0uMvt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyoPoJvRDedTM7F0CN4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgybWovYb32e7JxmV5t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxTnsusfTkdWI48NhN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzUCu4ShSb1Jcph7Nl4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_douuXaThiwuE12t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy4h4BJR_QGRr0AuiN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
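The "look up by comment ID" view above amounts to indexing the batch response by `id`. A minimal sketch, assuming only the JSON shape shown in the raw response; the `index_by_id` helper and the embedded two-record sample are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative excerpt in the same shape as the raw LLM batch response above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgyoPoJvRDedTM7F0CN4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy_douuXaThiwuE12t4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and key each coded record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgyoPoJvRDedTM7F0CN4AaABAg"]["policy"])  # prints: regulate
```

Indexing once and looking up by key keeps per-comment inspection O(1), which matters when a batch holds many coded records.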