Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
My intuition over 40 years ago training as an engineer told me that seeing the introduction of computer controlled machines will put many a good man out of a job. Fast forward to now what's the point of AI being great for healthcare and education if it can eventually kill us all. How such an intelligent man could not foresee this with such intuition that it can become a danger to humanity is beyond belief! Go back to over 40 years and give a man and woman a job,even if they hate it,it gives them purpose.
Source: youtube | Topic: AI Governance | Posted: 2025-08-17T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyTIc9kMpWtFCXMW4B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwM4BByYjAELwHGmax4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxh9WegVkl9g6oxLhl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyAoQdylcmRb6iGygx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6mvKWhjeHLwRclH94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyo52EzWE7OyCNjZBx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxsmWiNiHINS7LGySp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyt5PJlKbJ-3WAmnx14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWw6WCa7RchHddSzB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvGsagGE73y7hYX6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
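A response like the one above can be parsed and indexed by comment ID before it is stored. The sketch below is a minimal illustration, not part of the dashboard: the `DIMENSIONS` vocabularies are inferred from the values visible in this response, and the real codebook may contain categories not seen here.

```python
import json

# Abbreviated raw model output, copied from the response above
# (two entries shown; the full batch had ten).
raw = """
[
  {"id": "ytc_UgyTIc9kMpWtFCXMW4B4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwM4BByYjAELwHGmax4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

# Allowed values per dimension, inferred from the observed codings
# (an assumption -- substitute the actual codebook if it differs).
DIMENSIONS = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "mixed", "indifference", "unclear"},
}

def index_codings(raw_response: str) -> dict:
    """Parse the model output and index codings by comment ID,
    rejecting entries whose values fall outside the vocabularies."""
    by_id = {}
    for entry in json.loads(raw_response):
        for dim, allowed in DIMENSIONS.items():
            if entry.get(dim) not in allowed:
                raise ValueError(
                    f"{entry.get('id')}: unexpected {dim}={entry.get(dim)!r}")
        by_id[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgyTIc9kMpWtFCXMW4B4AaABAg"]["emotion"])  # → fear
```

Validating against a closed vocabulary at parse time catches the most common failure mode of structured LLM output: a value the prompt never defined slipping silently into the dataset.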