Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Agreed.. Especially when data seems to be the bottleneck to AI being useful, and…" (rdc_kr57tkm)
- "This was really mind bending, so, to add some levity here: Eliezer Yudkowsky may…" (ytc_Ugxrc-m3D…)
- "I was told back in the 1945s and 50 years that technology was going to make life…" (rdc_j6gt4et)
- "I love drawing since i was a Kid And BRO i was terrible ať drawing but i free al…" (ytc_UgyWGBpZt…)
- "Super interesting to realize that screening through search results in Google can…" (ytc_Ugzm1rQb6…)
- "Yea if you dont like the thing you are doing, you should just stop doing it and …" (ytr_Ugy878jXT…)
- "You want real safety from AI? Abolish all AI. That is the only way to stop AI fr…" (ytc_UgxLOQTya…)
- "He ruined it the time he started saying: we might live in a simulation!!! So, so…" (ytc_Ugwu0oONC…)
Comment
The fact that it was Sam Altman asking for licensing made it laughable. Just a form of regulatory capture to cement OpenAI's primacy and shut down competition, especially from open source alternatives. He can say he loves open source all he wants but everything from the hearing basically amounted to "Sure make irrelevant toys if you want, but if you risk our core business THEN you have to pay heavy fines (that big daddy Microsoft already pays for us easily <3)"
youtube · AI Governance · 2023-05-24T06:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx4u0ZPkS1dqxpVTAB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwtpAF4XCUHqK9BHV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzwezGGDTZGyMnvGq94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwAFVURiUHsbHETDYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLFbI1UE3K2BVUJth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZbolBMKLNc-lKUHR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHSP9xUAopkFkm3AJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw6UmmHlkPnN3LDP7d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxwu_K11S6W07C9mG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNRldY9Gthh127De54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
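A response in this shape — a JSON array with one object per comment, each carrying an `id` plus the four coding dimensions — can be parsed and validated in a few lines. The sketch below is illustrative, not the tool's actual pipeline: the `ALLOWED` value sets are inferred only from the codes visible on this page (the real codebook may contain more categories), and `parse_coding_response` is a hypothetical name.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above;
# the actual codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    skipping entries with a missing id or out-of-codebook values."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Minimal usage example with a made-up comment id:
raw = ('[{"id":"ytc_X","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
print(parse_coding_response(raw))
```

Dropping malformed entries (rather than raising) keeps a single bad row from discarding an otherwise usable batch of ten codes; a stricter pipeline might instead log and re-prompt for the failed ids.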