Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- rdc_gt5p4ki — "Yeah this is a strange group. I mean considering Taiwan is fairly allied with a…"
- ytc_UgzvPx2ET… — "If OpenAI is going to profit from it then yes, they need to pay a portion of tho…"
- ytc_Ugw91ef1z… — "Imho, training an AI model is NOT the same as a human taking inspiration from it…"
- ytc_UgywsGxPu… — "I go to chico state right now and have had classes with the professor you mentio…"
- ytc_UgyzOYzVE… — "thing with photography /photoshop etc they are MADE FOR ARTISTS TO USE while ai …"
- rdc_n0m2w41 — "The best part of their hiring AI is that it sets up an interview for you and whe…"
- ytr_UgyWudMJc… — "Robot players that can move in incredible ways? I think some kind of drone would…"
- ytc_UgzNblQG8… — "You’re pretty late to the party. Have done this with every model and it shows th…"
Comment
Don't just trust everything doomers like this guy say. I used to believe it myself, but it turns out their presentation is one sided and hence dishonest, and much or what they say is built on unstated and unverifiable assumptions. It's not true that we know nothing about aligning and steering these systems. Each time doomers quote unaligned behavior of today's LLMs, they conveniently fail to properly disclaim the relevant circumstances of how those results were obtained. Their definition of alignment, it you drill down, is a fantastical notion that is not applicable to any generally intelligent system by definition of how they chose to use the term. Go listen do David Shapiro to hear the other side of the story.
youtube · AI Governance · 2025-09-24T07:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
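The four coding dimensions above can be represented as a small typed schema. This is a minimal sketch, assuming only the label values visible in this dump (the tool's full vocabulary may include more); the class names are illustrative, not part of the tool:

```python
from dataclasses import dataclass
from enum import Enum

# Label sets below are inferred from the values visible in this dump;
# the actual codebook may define additional labels.
class Responsibility(Enum):
    NONE = "none"
    DEVELOPER = "developer"
    COMPANY = "company"
    AI_ITSELF = "ai_itself"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    MIXED = "mixed"
    UNCLEAR = "unclear"

class Policy(Enum):
    NONE = "none"
    LIABILITY = "liability"
    REGULATE = "regulate"
    BAN = "ban"

class Emotion(Enum):
    APPROVAL = "approval"
    INDIFFERENCE = "indifference"
    OUTRAGE = "outrage"
    FEAR = "fear"
    MIXED = "mixed"

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion

    @classmethod
    def from_dict(cls, d: dict) -> "CodedComment":
        # Enum construction raises ValueError on any label
        # outside the known vocabulary, catching malformed output.
        return cls(
            id=d["id"],
            responsibility=Responsibility(d["responsibility"]),
            reasoning=Reasoning(d["reasoning"]),
            policy=Policy(d["policy"]),
            emotion=Emotion(d["emotion"]),
        )
```

Validating each parsed record through enums like this makes an off-vocabulary label from the model fail loudly instead of slipping into the coded dataset.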
Raw LLM Response
```json
[
{"id":"ytc_UgyvszysOPULvlnHPR54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyMPNb7NcXl_FFV5ph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwYce413XYXAulH9W14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz2nWBFqk2uj9sWM7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwkzn30Eif96XevIgF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdEWYB95-zLV3zjWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzp4mhn5StuQwrfYxN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyEMQseeu6rjo1ITd94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwtyw4SHKVnrzrCsEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgLCKZr76H6gYhAtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
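The ID lookup the page offers can be sketched over a raw response like the one above. This is a minimal illustration, not the tool's implementation; the two inlined records are copied from the array above and `index_by_id` is a hypothetical name:

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_Ugz2nWBFqk2uj9sWM7l4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyvszysOPULvlnHPR54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

def index_by_id(payload: str) -> dict:
    # Build an ID -> record map so each "look up by comment ID"
    # query is a single dictionary access.
    return {rec["id"]: rec for rec in json.loads(payload)}

codes = index_by_id(raw_response)
print(codes["ytc_Ugz2nWBFqk2uj9sWM7l4AaABAg"]["emotion"])  # outrage
```

Indexing once and querying by key is the natural shape here, since comment IDs are unique and lookups happen far more often than re-parses of the response.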