Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm totally for it. Imagine if all humans died off they would have a chance to k…" — `ytc_UggUkRCpz…`
- "Yeah it's past alarms that's human fear and it's irrelevant at this point. What…" — `ytc_UgxFINXNB…`
- "I agree that character AI should be sued. Tremendously. And a lot of people are …" — `ytc_Ugy70gc-M…`
- "The actual argument is that you simply shouldn't be using LLMs at all, especiall…" — `ytc_UgyZfnaKL…`
- "@JohnSmith-x3y8h the topic was about self-driving cars' sensors. My comment stat…" — `ytr_Ugwedi7Fr…`
- "Listen to how dumb that sounds lol Full Self Driving but Tesla says you must rem…" — `ytr_Ugyesx7pP…`
- "If I want an AI to be used by a doctor, or a lawyer, I want an AI that can take …" — `ytc_UgwTg8olo…`
- "They probably say the same thing about ai generated music they are also too lazy…" — `ytc_UgyjFjh2F…`
Comment
Thank you for sharing.
I am fully aware that AI is being built and deployed as a fully commercial, top-down industry.
Besides the cybersecurity issues in data management, ethical and responsible AI implementation, and international legal jurisdiction matters, I have three real concerns:
1. Do we have enough water to build all the required data centres?
2. What provisions will be implemented to ensure the most vulnerable people around the world have access to AI's benefits in the short term, like anyone else? In other words, how will the commercially non-viable market (bottom-up approach) be looked after, or will the plan be 'business as usual'?
3. Who audits the auditor? In other words, who makes all the final decisions?
We must discuss and define these issues upfront with honesty, transparency and integrity. Zero BS.
youtube · AI Harm Incident · 2026-01-01T10:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx_v-dCz0Mtslo_SOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugwwzk_nrMlc6XHV9b94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyihyKACroqb6Z8no54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmaQ6IfJ8tOTHlY0Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgykWt77yLHntn7iH0V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgybXpPv6UUUMvem2114AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxV9np8GA8OQSWMBy54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFBqfab4U_yHPzM_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFR4JG7py52CD49-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8Xa97XXEY2Z_1R_14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
```
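The raw response is a JSON array with one object per coded comment, carrying the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for lookup by comment ID — the `parse_codings` helper and the truncated two-entry sample are illustrative assumptions, not part of the tool itself:

```python
import json

# Illustrative two-entry sample in the same shape as the raw response above.
raw_response = """
[
  {"id": "ytc_Ugx_v-dCz0Mtslo_SOx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugwwzk_nrMlc6XHV9b94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions, matching the coding-result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a batch coding response and index it by comment ID.

    Raises ValueError if an entry lacks an id or any dimension, so a
    malformed model response fails loudly instead of silently.
    """
    codings = {}
    for entry in json.loads(text):
        missing = ({"id", *DIMENSIONS}) - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id')!r} missing {sorted(missing)}")
        codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

codings = parse_codings(raw_response)
print(codings["ytc_Ugx_v-dCz0Mtslo_SOx4AaABAg"])
```

Indexing by ID up front makes the "look up by comment ID" step a constant-time dictionary access, and the loud validation catches entries where the model dropped a dimension.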