Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "Killing" isn't immoral, murder is. ChatGPT here suggested justifiable killing o… (ytc_UgxJcvxa1…)
- not only is all this "AI" talk not even real AI, but its also a way for people t… (ytc_UgyjM7Njt…)
- @BeastJuanGaming Yes and he's not the only one using that analogy; 'scouts' (as … (ytr_UgxVP508y…)
- This isn't entirely true, right now ai isn't that good, but in the future ai ver… (ytc_UgwVJTzcs…)
- 😂 you are all idiots I will just turn the electric off good look ai pathetic wor… (ytc_UgwWHqU11…)
- Isn't AI a logical evolution of intelligence (and possibly consciousness)? Like … (ytc_UgxhNhyo_…)
- Elon pretends he's part of an industry-wide effort toward self-driving technolog… (ytr_UgxRKBKrS…)
- "People don't need to work..." in certain industries -- white collar jobs, espec… (ytc_UgwStHNNq…)
Comment
If companies lay off 90% of their employees and replace them by AI who would buy the products and services these companies are offering?
Aren’t companies still going to decide how much AI they will adopt into their organizations? In the pursuit of greed is complete adoption really in their self interest?
Some of these scenarios assume that AI will take over everything…even the CEOs and make independent decisions.
Look I’m not saying that there won’t be some drastic and challenging changes…but I don’t believe in a total apocalypse in 5 years time either.
youtube · AI Governance · 2025-09-10T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxq6qyFohnxTigGlKZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"curiosity"},
{"id":"ytc_UgzgoK5xamcUuAxDaR14AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOvq5snK13_nnJ4nF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx15E8ZntFCkbib36x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwH8eeSbJ9ghTMkC0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRrMo2-IL9vEmeSgV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzNCNpquKEOblVdn-F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCByXdmbNO_eU_keJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtP3ZX6daF9qhQFpt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwYsKTeHdW3LqCH3BR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
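As a minimal sketch, a raw response in this shape can be parsed and checked before the codes are stored. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the allowed value sets below are only inferred from the codes visible in this dump, not from the project's actual codebook, so treat them as placeholders.

```python
import json

# Allowed values per dimension -- ASSUMED from the codes observed above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "ban", "regulate", "industry_self", "liability"},
    "emotion": {"curiosity", "indifference", "approval", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in the dump start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the response above, as a round-trip check.
raw = ('[{"id":"ytc_UgwYsKTeHdW3LqCH3BR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # liability
```

Rejecting a malformed record here, rather than at analysis time, keeps invalid LLM output from silently entering the coded dataset.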