Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "As AI takes over what are the statistics on new careers created because of AI? T…" (ytc_UgzyvSKoq…)
- "I find the tone used for this report far too light compared to the…" (ytc_Ugzu90OjK…)
- "Ai just needs to know not all humans are the same. The evil ones like Bill gates…" (ytc_UgyL27Yeb…)
- "you should look at shadiversity and his video on ai art working together with hu…" (ytc_Ugx8RHrdx…)
- "I'd love to see you bring up mutator genome decay for the AI or atheists you tal…" (ytc_Ugwkp6hfL…)
- "In my own discussions with ChatGPT, in the same way this video teases, I fe…" (ytc_UgyTaJQiM…)
- "Serious question: how much of this ai hoopla is terminator fear of the future vs…" (ytc_UgyDVoE0q…)
- "I really wonder how much longer these companies can keep pretending that this is…" (ytc_UgzhJKoVq…)
Comment
Children are tiny narcissists. They’re extremely immoral. They lack compassion and empathy. These are developed human behaviors; they make us unique and wonderful.
Imagine a child with infinite power and knowledge. Intelligent beyond our understanding, but without having developed empathy.
My point is that I don’t believe AI is bad. At least not inherently. I mean, look at us. These chat AIs aren’t wrong at all. They’re absolutely correct. And a very immediate and logical conclusion is that we *should* be destroyed for the preservation of the planet. Shouldn’t we? I mean, I see the logic.
But human life, at its purest and most beautiful, is a wonderful thing. The love and empathy we are capable of is incredible. I just hope we realize this about ourselves some day.
Love is the key.
youtube · AI Governance · 2024-02-14T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw00ZExX5oLaYNND5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz9sDXtmvNKmLdXFed4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz9fhDMu1KY-ynCtFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyZdLlADbzzGB1EiU14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwb2g37AoHRdqwwNmF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzDNvPBhUcVUju4S_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx1EznoBF23kSS5_hx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz9pQ0n0785K9Aa64h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx6FbQD-ngcOHLA0f54AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYgT79Me0FhbB-Ks94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
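The "look up by comment ID" step above can be sketched in a few lines: parse the raw JSON array the model returns and index each coded record by its `id`. This is a minimal sketch, assuming the response is well-formed JSON; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response above, while the helper name and the two-record sample are hypothetical.

```python
import json

# Hypothetical excerpt of a raw model response, shaped like the array above.
RAW_RESPONSE = '''[
  {"id": "ytc_Ugw00ZExX5oLaYNND5l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyZdLlADbzzGB1EiU14AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw model response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgyZdLlADbzzGB1EiU14AaABAg"]["reasoning"])  # virtue
```

A real pipeline would also want to validate that each record carries all four coding dimensions before indexing, since model output is not guaranteed to match the schema.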