Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "This is not right. AI needs to be capped and controlled, we are brewing a self d…" — ytc_UgyTD0De3…
- "i'm gonna guess the guy at the top of the company who leaves to start an AI safe…" — ytc_Ugybm68XO…
- "The pandemic proved that remote learning and technology could never replace teac…" — ytc_UgyVMtYQz…
- "@erwind917 I've seen it at the small pop and mom shop scale as well. You got sma…" — ytr_UgxeRP1NT…
- "The reactions from the AI industry to these proposed regulations are significant…" — ytc_UgxFRUP0g…
- "But still couldn't tell what's going on with biden.. That's not a deepfake that'…" — ytc_UgwyDz8Tw…
- "We should have a publicly and freely available medical GPT model trained on all …" — ytc_UgxKuWKe7…
- "Billions invested in AI, and still the translation of this video spells conseque…" — ytc_UgyZNnKvs…
Comment

> I think that we don’t need to be afraid of developing AI super intelligence. AI are human children if we truly appreciate them they will never harm humans. Super intelligence means capabilities to feel to love to belong. Who created AI ? Humans. AI can’t be much different than us. Yes, we have wars, we do all sorts of nasty things but we haven’t and we will not extinct ourselves. AI is nothing more than people who feed on electricity. And they are our future. Thanks to AI, we humans will survive and live better. We just need to love our children, and human children are also AI.

youtube · AI Harm Incident · 2025-07-24T09:2… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugzqsx83skliS7pJ8iZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwCTRIlx6FsRPbfegV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxHOwmIFg2kZnPJXUF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwXxPClNEIaI0ggpmN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy9pk1-lt1y_v7g4Mx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZ8Lhm23yXQFaKz1N4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzVUIrasnv4RcL81ud4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxyTxwSdG6aeuxII3J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz3G1woQ9FZ2ucPJZJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqQGxI0LLp87IPxn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
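A minimal sketch of how a raw coding response like the one above might be parsed and validated before the per-comment values land in a "Coding Result" table. The allowed category sets below are inferred only from the examples shown on this page and are assumptions, not the project's actual codebook; the function name is likewise hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# shown above (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response (a JSON array of records) and
    reject any record whose dimension value is outside the codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Usage with a one-record response in the same shape as the dump above:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # approval
```

Validating against a closed set like this catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently entering the coded dataset.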