Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
“Why use ai when you can learn to draw”
-I forgot the channel name lol…
ytc_Ugz0CdKJf…
This talk and the subtle hints sets the stage for multiple lawsuits against the …
ytc_UgyNU8I06…
I know I'm in the minority, but these types of advancements really excite me. N…
ytc_UgwbCT8ob…
Automation is nothing new. Replacing people in mundane repetitive tasks is cons…
ytc_Ugz8cx0Fm…
Why are we beating around bush , those tech bros are all complete sycophantic de…
ytc_Ugy5iGAYB…
i think this is a basic take, and essentially conveyed by this video, but to me,…
ytc_UgweDdQ0n…
Very simple fact. You put your brain in the hands of AI. AI will disrupt all you…
ytc_UgwPTFbMT…
I find it unfair that ai art has open people who can't draw the ability to make …
ytc_Ugzv-jdi4…
Comment
we have not achieved strong ai but weak ai which is sophistication of automated instruction. weak ai is dangerous because of leaders who choose to believe in it and assume the assumptions of its decision making have wisdom rather than mere repetitive series of fixed instructions pattern just like in chess computer. it is impt to make those who choose to rely on ai to be responsible and accountable with their lives
youtube
AI Governance
2025-06-16T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
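A coding result like the one above can be sanity-checked before it is stored. A minimal sketch in Python, assuming the only allowed values per dimension are the ones observed in this page's raw responses (the real codebook may permit more):

```python
# Allowed values per coding dimension, taken from the values that appear
# in the raw LLM responses on this page (assumption: the codebook may
# define additional values not shown here).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate_record(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside ALLOWED."""
    return [
        (dim, record.get(dim))
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]

# The record from the "Coding Result" table above:
record = {
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "liability",
    "emotion": "outrage",
}
print(validate_record(record))  # []  (every dimension has a known value)
```

An empty list means the record is valid under these assumed value sets; any offending pair pinpoints the dimension to re-code.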
Raw LLM Response
[
{"id":"ytc_UgyhPETlAUy35Alrn2J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1nEUgXfIt7LLuejF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgPW6rCy7paYRJuz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyhKGopKNaLRyK29UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-GdmuiRRHN2vccCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQxpA_JXRUjwkjySl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0p18807wntT9j7314AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyinjspiVuTNhnzu7p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyeXPx7zO5ARJ4QPrl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwFed3csJsg1KzBfGJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
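The "look up by comment ID" view above can be backed by parsing a raw batch response like this one and indexing it by `id`. A minimal sketch in Python, assuming the response is valid JSON; the IDs below are placeholders, not real comment IDs:

```python
import json

# Hypothetical raw batch response, shaped like the output above
# (placeholder IDs stand in for real ytc_* identifiers).
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate",
   "emotion": "outrage"},
  {"id": "ytc_BBB", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability",
   "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index each coding record by its ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_BBB"]["policy"])  # liability
```

Indexing once up front makes each subsequent ID lookup a constant-time dictionary access rather than a scan of the list.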