Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If this car gets hacked, you could seriously be kidnapped. Even if it isn't hack…
ytc_Ugz39XFyc…
@CaliFinestProblem Since LLMs can only produce output that is statistically driv…
ytr_Ugw4VTSPV…
It has instructions on how to respond to threats and how to steer its responses …
ytr_UgyqLx-ZG…
I hate greedy corporate AI but as a person that uses AI chatbots just to you kno…
ytc_Ugz03i3d4…
AI, aka large language models, cannot comprehend, cannot extrapolate beyond trai…
ytc_Ugxts2689…
I don't agree so much with her. AI helped a lot of people to improve their lives…
ytc_Ugx6CzBUL…
Zuckerberg has the charisma of early AI, quite worrying that the people rich eno…
ytc_Ugy6SujyR…
How do i say my opinion on this in a very nice way?...oh yeah, here: i would rat…
ytc_UgxmU95iy…
Comment
As AI rapidly advances, millions are losing their jobs, echoing the shock of Japan’s post-bubble economic collapse in the 1990s. Factories, offices, and creative industries now face automation’s cold efficiency. Like Japan's "Lost Decade," this century may witness a slow-burning societal unraveling, marked by uncertainty and disconnection. Entire communities, once built on human labor, now struggle to redefine purpose. The world watches as technology outpaces adaptation, risking a silent, global crisis of meaning. - written by chatGpt
youtube
AI Governance
2025-06-24T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxz8pxIBJIm5PmIywR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwmG0OBC6oUPqTEOgB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy4P9QbEo_9vlGgSgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugze7NxUzt0jG8CPsUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzRcN9o33tQLK8I-9Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwvpFUD_6NJmodoQch4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwVDuYOdgPinSwLLY14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwTA7vVzlgcZNmuK3p4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2r3An0fsYlJ8CSEx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxWl8TK76CDvamUUTF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
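A response in this shape is a JSON array of records, one per comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). Below is a minimal validation sketch for such a payload. The codebook sets are assumptions inferred only from the values visible on this page; the real coding scheme may allow additional codes, and `validate_coded_batch` is a hypothetical helper name, not part of this tool.

```python
import json

# Allowed codes per dimension, collected from the values visible in this
# page's coding table and raw responses. ASSUMPTION: the real codebook may
# contain values not shown here.
ALLOWED = {
    "responsibility": {"distributed", "government", "none", "ai_itself",
                       "user", "unclear"},
    "reasoning": {"consequentialist", "contractualist", "deontological",
                  "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation",
                "approval", "unclear"},
}


def validate_coded_batch(raw: str) -> list:
    """Parse a raw LLM response and check every record against the codebook.

    Raises ValueError on malformed JSON, a missing id, or an unknown code,
    so a bad model response is caught before it is stored.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id': %r" % (rec,))
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: bad %r value %r" % (rec["id"], dim, value))
    return records


# Example: one record in the same shape as the raw response above.
raw = ('[{"id":"ytc_Ugxz8pxIBJIm5PmIywR4AaABAg",'
       '"responsibility":"unclear","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"outrage"}]')
records = validate_coded_batch(raw)
```

Validating before storage is what makes the "Coded at" timestamp trustworthy: only batches where every record carries a known code for all four dimensions reach the results table.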