Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Niel is so full of himself. He's smart, but he thinks.. like a lot of "Famous" s…" (ytc_Ugz_x3Byv…)
- "There's also people corrupting AI by feeding it false material or corrupted mate…" (ytr_UgxsEId_G…)
- "While "they" try to excuse themselves by talking about intent, give them the new…" (ytc_Ugxdwj3RL…)
- "Agreed about self driving cars not being the complete solution for cities, but o…" (ytc_UgxWZC46R…)
- "Can you choose to let the car drive itself or does it automatically do it regard…" (ytc_UgzTeWZQT…)
- "Waymo had an operating loss of roughly $1.23 billion in Q1 2025. They belong to …" (ytc_Ugwc8cfdg…)
- "11:29 one that wants to destroy humanity, and workers , you all see in 2 or 4 ye…" (ytc_UgzRkA2cv…)
- "This video failed to address whether the autopilot has lower or higher chance of…" (ytc_UgzRJBNNr…)
Comment

> The real danger is associating AI output with intentions and thoughts. The guest might be an expert, but is loading AI with conscience and anthropomorphizing it. That, in turns, gives more credibility to AI and increase the blind trust people give to these algorithms. THIS IS THE REAL DANGER.

youtube · AI Governance · 2025-10-15T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaD2umPmOzA1LG5td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxKngtaKnq6I9KX2ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzWWUxbn57DTangeKV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyLAsUWxo-NgOgj6It4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy8SULrkjW6bu0TdkR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyPBrin1P8uN0sKIs14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwQag7JWbslwo1Ewdp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz4J4GDO4ey3uKlwoh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVkTPe3ipP2mY6UOB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugywf6bvJ52Tch_vb014AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
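The "look up by comment ID" step amounts to parsing a batch response like the one above and indexing the coding objects by their `id` field. A minimal sketch, assuming the raw LLM response is a JSON array of objects with the five keys shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_codings` is illustrative, not part of the actual tool:

```python
import json

# Two rows copied from the raw response above, used as sample input.
raw_response = '''
[
  {"id": "ytc_UgzWWUxbn57DTangeKV4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxaD2umPmOzA1LG5td4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index each coding by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)

# Retrieve the coding for one comment, as the dashboard's lookup box does.
coding = codings["ytc_UgzWWUxbn57DTangeKV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user fear
```

In practice the lookup would also need to handle IDs missing from the batch (e.g. `codings.get(comment_id)`), since a model may drop or mangle rows in a long response.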