Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I think it would take a gov investment into new infrastructure. I think like Elo…" (ytc_UgwrXpfJD…)
- "Developer will develop AI to enhance work more advanced and replace them as time…" (ytc_UgxUXKmvA…)
- "THE BIG QUESTION IS: What is legal and what is illegal for ai to do. The should…" (ytc_UgzHDpRU8…)
- "AI run amok will create the wealthiest most powerful oligarchs in human history,…" (ytc_UgyrQ37zx…)
- "The whole point is being missed. Think in comparative terms. The industrial re…" (ytc_Ugw_02p3z…)
- "Holy shit, the man clearly says that he doesn't want AI to stop to a halt. He cl…" (ytc_Ugw1w_pAw…)
- "1. It is not too far to take all jobs. 2. It is just the beginning of AI. 3. Ple…" (ytc_Ugw1Hm-2m…)
- "For me it seems counterproductive. AI is being monetized....but it's also puttin…" (ytc_Ugw-3LiWY…)
Comment

"These studies are actually along the path predicted for a safer AI future. The finding of these behaviors and implementation of checks and balances or re-workings of future AI to limit the corruptible models."

Source: youtube · Topic: AI Harm Incident · Posted: 2025-08-10T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyTiW8FBSuH8pUr-sZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjfDkovW7edE4635Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzVcyKiVGyxtT7sPf94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCO6TKTAFgxcdkkoR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugymz11Qrw9ZffK0g-J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy5X36LHOnTAya5Yhh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFCS9nW4BNL9Vknqh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwTb6CkNj-uBou0aep4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwljDq1JxY0DvOpR4d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwQv8ddMh2tyWVppzF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
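The raw response is a JSON array with one object per comment, coding four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch might be parsed and validated before display, assuming the allowed category sets below (inferred from values visible on this page, not a confirmed codebook):

```python
import json

# Assumed category sets per coding dimension, inferred from the values
# that appear in this page's samples; the project's real codebook may
# include other categories.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference",
                "mixed", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only rows whose codes
    fall inside the allowed category sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical single-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
print(len(parse_batch(raw)))  # 1
```

Validating each batch this way lets the dashboard flag responses where the model drifted outside the codebook instead of silently storing them.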