Raw LLM Responses
Inspect the exact model output for any coded comment, or look it up directly by comment ID.
Random samples — click to inspect
Having this data on carbon emissions, environmental nazis will create narrative …
ytc_UgzfAhS5t…
Ok so i think we should use ai for stupid reasons like generate a meme image to …
ytc_UgwW3ePGh…
"Self-driving trucks aren't going to do anything to truck drivers. Their employe…
ytc_UgxYJIWMu…
born too late to explore the earth
born too early to explore galaxy or other pla…
ytc_UgwJXdLaO…
ain't no way I got an ad for an AI product in the middle of this absolute PEAK v…
ytc_UgwoFVy0_…
I'm a software engineer who works with AI daily, and ~90% of my code is written …
rdc_o9wluvn
Drivers will be the biggest domino, but let's be real: most will fall by 2040, t…
ytc_UgwuUwfGL…
@zinudscrazy Well, if you continue using AI, human art is going to be more expe…
ytr_UgwbKzSOj…
Comment
I built an AI model that creates actual mathmatical proofs for everything it outputs. And when I benchmark it on safety/ethics, it gets 99%😏. Sites going up today, with early access sign up, job positions, and other partnerships. Ill try to post the link here once it’s live
youtube
AI Governance
2025-09-29T13:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxSRbzxPW-33sKEZJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz0sm3g-9GpBugiC2N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyKOhC1ACwXskGf9Np4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxnWTdfuEyDJ5xe0wZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz6d7qxvUzcMo1p6ix4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzk1_MQN6PHyec721F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwlVYj4ZGR_nZgKZjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyqwpqMxj8VNhSAljV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgwLiDvBmOUP6yrToC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxgo4ah4AH0DboUPz54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
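The raw LLM response is a JSON array of coded comments, one object per comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) keyed to a comment `id`. A minimal sketch of the "look up by comment ID" step, assuming the batch-response shape shown above (the function name and the two-row sample are illustrative, not part of the real pipeline):

```python
import json

# Two rows copied from the raw batch response above, for illustration.
RAW_RESPONSE = """[
  {"id":"ytc_UgxSRbzxPW-33sKEZJ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxgo4ah4AH0DboUPz54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugxgo4ah4AH0DboUPz54AaABAg"]["policy"])  # liability
```

Indexing once into a dict makes every subsequent ID lookup O(1), which matters when cross-referencing many sampled comments against a large coded batch.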