Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugx6ym0rL…`: AI may look at generational perspectives which would be highly problematic for m…
- `ytc_UgzQ1OTL1…`: Watch the series “Person of Interest”… all about the instituting of AI, how it…
- `ytc_UgyRs9Ach…`: A completed product? What is a completed product when you dont even know what it…
- `ytc_UgzjmZZuY…`: If you feel scared about the fact that you did poorly on this? Dont be too alarm…
- `ytc_UgyfmMCdD…`: This is why I just use AI to answer really stupid questions like "Would Guts fro…
- `ytc_Ugwkc9ZLK…`: These Ai arguments are so @3$,yeah sure maybe the AI is “learning”,but the promp…
- `ytc_Ugw8h7iwZ…`: This is the beginning of 2 leaders of robot factions that will soon run the worl…
- `ytc_UgxyAiBxv…`: What happened to the people replaced by automation? Have you not heard about pri…
Comment

> Back in the 90s ''quantitative methods for business decisions' was all the rage, the maths used to make a business decision was all Greek to me (excuse the pun lol), this seems to me precisely what in theory LLMs should be good at. Many humans also lack logic. The President of the United States is a glowing example, and the fact he got elected by popular demand is quite telling about the lack of logic in the general American Population who feel more comfortable being ruled by Idi Amin from Uganda and the Greenwich Village People than anything resembling 'Civilisation' let alone 'Western Civilisation'. The fact that LLMs have no logic seems to be a fundamental design flaw.

youtube · AI Responsibility · 2026-01-01T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugwx5j8cNmk-gcsRROl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy_KBqpdPdtbGQ_c9Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyKZMzmUjOXTdNCS9J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6Krw743kQ0czagFR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxkJkssf65DJ2aN2PZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxtd-AQFukigUPhsKp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqWI00BoOysLQdJvJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwrN322VVJNBwteorp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyD5DkCVQm-qz5Sp_F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwlodLYAMmDhKjWHZ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
```
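The lookup-by-comment-ID flow above can be sketched in a few lines: parse the raw batch response as a JSON array and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the `lookup` helper and the `"unclear"` fallback for unmatched IDs are assumptions made here for the example (the Coding Result table shows `unclear` when no code is found).

```python
import json

# Hypothetical two-row excerpt in the same shape as the raw batch response above.
raw = """[
  {"id": "ytc_Ugwx5j8cNmk-gcsRROl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyKZMzmUjOXTdNCS9J4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]"""

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment, or 'unclear' defaults
    when the model response contains no entry for that ID."""
    row = codes.get(comment_id)
    if row is None:
        return {dim: "unclear" for dim in DIMENSIONS}
    return {dim: row[dim] for dim in DIMENSIONS}

print(lookup("ytc_UgyKZMzmUjOXTdNCS9J4AaABAg")["policy"])   # prints "none"
print(lookup("ytc_not_in_response")["responsibility"])      # prints "unclear"
```

Keying on `id` also makes it easy to spot comments the model skipped: any submitted ID missing from `codes` falls through to the `unclear` defaults, which is what the dimension table above displays.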