Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI is like a nuclear bone except this nuclear bomb can blow itself up without ou…" (`ytc_Ugx4dyLDC…`)
- "AI training is the key to protecting the technology!!!( Only use an AI designed …" (`ytc_UgybXX-CA…`)
- "Addendum to my addendum: there's a lot of nuance in the AI debate, but it's such…" (`ytr_Ugw5boK3w…`)
- "Autistic people inventing AI and discovers its like parenting more than coding..…" (`ytc_UgwqrJRQK…`)
- "I don’t think AI will get so smart that we can’t control it. Computers always ne…" (`ytc_Ugx46fp5k…`)
- "ChatGPT can definitely speed up your learning/figuring something out experience …" (`ytc_UgzfHXU5D…`)
- "tbh my only issue is the generative ai stuff. It has practical uses outside of b…" (`ytr_Ugxcg7dOE…`)
- "I've seen the argument that gen AI "helps disabled artists", which implies the t…" (`ytc_UgyoKZR0o…`)
Comment
> I read in a book called “The Book Of Urantia” that the technology is good for humanity. The human being will need it because we need to focus on creating new things for our development. I felt a lot of hope when Roman said that once AI replaces us we’ll start to think on creating new things. And I think we shouldn’t be afraid. We believe that Love is everything. Maybe we can teach also to AI to love us too😅

youtube · AI Governance · 2025-09-07T00:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyw5OcFnc6xffXxbn54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzxWUtEIBl4C4cf17B4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyHP2c7IFEXP8pMipB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzE6nmX8BvNpiCa-6R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzqPLNKaGVK_OWnqEF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyqtJTmt4GXjgEst3d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwaN0JjY-1HksNi0RF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzMTD74sgP6php7fVF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwuvJxZQBj9rHQvV4R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5m1KSVz0aMXhvjGJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
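The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal validation sketch for such a batch, in Python; the `validate_batch` helper is hypothetical, and the allowed value sets are an assumption inferred only from the codes visible above (the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the codes observed above.
# ASSUMPTION: the actual codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        # IDs in this dump start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unrecognized id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r} not in codebook")
    return rows

raw = ('[{"id":"ytc_UgzE6nmX8BvNpiCa-6R4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

Rejecting unknown codes early, rather than silently storing them, keeps the downstream tallies limited to categories the codebook actually defines.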