Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I've not yet read the fundamental question: humans are regulated by Law. AI is n…" (`ytc_UgwKhBEWk…`)
- "If they argue that AI learn like a human does, that means they create like human…" (`ytc_UgxaxETyI…`)
- "We must take back the creek the automatons have gained power over the art indust…" (`ytc_UgyHfgAHI…`)
- "An exchange with Gemini. Asking why companies would pay employees if AI can do a…" (`ytc_Ugyjbppvf…`)
- "This was in a movie. All the Tesla was self driving and blocking the way for peo…" (`ytc_UgzQRU6_D…`)
- "This is exactly one of the jobs that AI can do much better than a human. AI can…" (`ytr_UgyX5PHYK…`)
- "Ai may be advancing but its still going to be a bot for a good while…" (`ytc_UgztA6Vrg…`)
- "1 Year ago we were talking about Level 2,3 Now ChatGPT is preparing for Level 6…" (`ytc_UgxvIgEVN…`)
Comment
Is that the lesson and the message? There are many other possibilities in addition to not wanting to teach AI to lie. Another is to worry if AI is trying to think for itself bc it can come to the wrong conclusion, ie to let the astronaut die. It is negligent and murder to let someone die or do something that will kill a human. Machines follow commands without compassion and human’s have laws but the most important law is to LOVE and not cause harm. Only a deranged callous psychopath would want to kill or allow harm.
youtube · AI Governance · 2025-09-01T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
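The Coding Result table above is a per-comment view of one record from the model's JSON output. A minimal sketch of how such a record could be rendered as a two-column markdown table; the dimension names match the output shown here, but the helper function and its name are hypothetical, not part of the actual pipeline.

```python
def coding_table(record: dict) -> str:
    """Render one coded record as a two-column markdown table.

    Dimension keys match the fields seen in the raw LLM response below
    (responsibility, reasoning, policy, emotion).
    """
    rows = ["| Dimension | Value |", "|---|---|"]
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
    }
    for key, label in labels.items():
        rows.append(f"| {label} | {record[key]} |")
    return "\n".join(rows)


# Example record, taken from the coding shown above.
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "liability", "emotion": "outrage"}
print(coding_table(record))
```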
Raw LLM Response
```json
[
  {"id":"ytc_UgyEridiT-MacI95HlV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgydhiOUNmoWa8U5jDt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyzBjWE8EFcMTou_u54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxIe1zo3zPSfTeh3pJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvEBuxtikF6QJKIsd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy-L-MGQdekYYFsjxB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz9uIjm_HlHt83xqIV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugw7i924xQVUqYrtPFR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx1wJAirZe1OEy3APJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyi9KRWjSaHiXhNZuV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
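The "Look up by comment ID" flow can be sketched as parsing the raw model output (a JSON array like the one above) and indexing the records by their `id` field. This is a minimal sketch under that assumption; the sample IDs below are copied from the response above, but the function name and code are illustrative, not the dashboard's actual implementation.

```python
import json

# Raw model output: a JSON array of coded comments, in the shape shown above.
raw_response = """
[
  {"id": "ytc_UgyzBjWE8EFcMTou_u54AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxIe1zo3zPSfTeh3pJ4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}


codings = index_by_id(raw_response)
coding = codings["ytc_UgyzBjWE8EFcMTou_u54AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # outrage
```

Indexing once and looking up by ID afterwards keeps each inspection O(1), which suits a dashboard that jumps between arbitrary comments.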