Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Suppose algorithm is to make things better on earth. For this if robot decides t… (ytc_UgwrpQRA9…)
- What you’re seeing isn’t something ‘forming’; it’s a mirror of the data it was t… (ytc_Ugz_bJ6pN…)
- Just over half way through and no mention of the huge environmental impact of AI… (ytc_UgwhSfpqc…)
- “What Everyone Is Getting Wrong About AI And Jobs”: Debate Extremes: There ar… (ytc_Ugy2mQA49…)
- It's not the fault of ChatGPT. It's the fault of people. People are easily corru… (ytc_Ugyv_D_za…)
- There is nothing an AI theoretically cant do, if you think your job wont be take… (ytc_UgyHJzdeA…)
- AI will be the end of the human race. Has anybody heard about the robots that we… (ytc_UgzX4P-T6…)
- I would be more worry about "1 month later... a pilot caused 367 deaths when xe … (ytc_Ugx4WaTkq…)
Comment
This may be your most fascinating interview. I was hanging on your guest's every word. I am torn on the benefits versus the dangers of AI. Knowing human corporate greed and being the "first" at creating the best AI, we may be doomed? Greed vs. ethics and safety when it comes to AI. Which wins out? Greed/profit or sensible and safe AI? Fascinating.
youtube · AI Governance · 2025-09-04T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgznuK4pv4sFpKl_uWt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxX7Evo237oG8GkZ4l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxqGbs24PEKZXTquMV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw6ZzdglwfLu5zedKJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuOT6jDp-nSJgZmrd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzVkQJmlMqUWhrTuFl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwevwzXETBafeBpVVx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcheQaYEVaz-3qkSN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyL27l_1InP7psG0PB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0C9nFKS7Gd4kiSJV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})
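Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)`. A strict parser rejects the whole payload, which would be consistent with every dimension in the coding result falling back to `unclear`. The sketch below illustrates this failure mode and one possible repair; it assumes the pipeline parses with Python's `json.loads` and uses `unclear` as its parse-failure default (neither is shown in this page, so both are assumptions), and the `raw` string is a shortened stand-in for the real response.

```python
import json

# Stand-in for the raw model output above: opens with "[" but closes with ")".
raw = '[{"id":"ytc_x","responsibility":"user","emotion":"fear"})'

try:
    records = json.loads(raw)
except json.JSONDecodeError as exc:
    # Strict parsing fails on the mismatched bracket; a coding pipeline
    # would typically record a default such as "unclear" for every
    # dimension in this case (assumed behavior, not documented here).
    records = None
    print(f"invalid JSON: {exc}")

# One possible repair (illustrative, not the pipeline's documented
# behavior): replace the stray closing parenthesis before parsing.
if records is None and raw.startswith("[") and raw.endswith(")"):
    records = json.loads(raw[:-1] + "]")

print(records[0]["responsibility"])  # -> user
```

Repairing near-JSON like this is fragile; schema-constrained decoding or re-prompting the model on a parse failure is usually the more robust fix.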