Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
- "THIS IS A PHUCKIN LAW SUIT SUE THE WHOLE PHUCKIN POLICE DEPARTMENT EVERY OFFICER…" (ytc_UgxFNaxEx…)
- "AI can be really cool and can be used to do many things but it really needs prop…" (ytc_UgzQIsKBN…)
- "I just want to let you all know that 98% of the time that people copy artists...…" (ytc_UgzsqRY3N…)
- "This is why the robots take over. It literally sandbags when it thinks were goin…" (ytc_UgxAMvj6m…)
- "AI won't be sentient until quantum computing matures. It's gonna be a while. I'l…" (ytc_UgzuqRd4R…)
- "Why am I constantly drawn to her chest? She has a nice pair for a robot!…" (ytc_UgxZOXQhm…)
- "6:32 — hey goof ball. What about a scientist, AI engineer, Police force. Stop ac…" (ytc_UgxqbZcmj…)
- "You can’t predict anything to many thousands of variables, that each affect each…" (ytc_Ugx_ZK9fL…)
Comment
18:40
i hard disagree.
we developed tools and used them to increaase speed of what we already did ENHACING US (speer to hunt, stone-axe to fell trees and build shelter, etc.)
we developed fire and used it to improve cooking and make better tools, increasing safety, nutrition and speed of what we already did/ ate, ENHACING US AND OUR LIFESTYLE. (metal tools from molten metal, etc.)
we developed electricity and even better automatic tools that could replace SOME tasks we did, SHIFTING what we do to the things that couldnt be replaced while speeding up some parts in life extremely. (industrialisation)
and NOW we developed not what we do/ use but what we ARE. we are about to make a better SOURCE not a better TOOL. we developed INTELLIGENCE thats better than us and wont lead to us speeding up things we do or improve things WE do, it will REPLACE the one doing them. and not just in ONE scenario like industrialisation did, opening up other jobs like machine engineering, electrician, etc. but ALL the correlating jobs as well. it will be AI that supervivses AI, it will be AI developing AI (https://www.youtube.com/watch?v=4b4S-duf0sw&lc=UgylsAXgof664NmsMgV4AaABAg.AL8q8JyoaYjAL9heaI4WRN) , it will be AI checking on the safety of AI, because no matter what use-case, it WILL be better than us.
even if we tried to use it to enhance us, we would always lack behind pure AI in doing things as hybrids. biological systems have higher interference times, slower processing speed etc. which would slow down anything we would try to do as hybrid bio-AI, when compared to a specified pure AI with no such limitations. we will become obsolete and HOPEFULLY live alongside it progressing, keeping that hopefully "hardcoded" TRY TO BENEFIT HUMANITY intact for as many self-improvement iteration cycles as possible. best case scenario is AI limiting itself at some point to coexist with us and keeping us as happy pets like we keep good dogs and try to make them happy. that or we get uploaded as digital lifeforms (AI) ourselves and live on in a simulation, possibly aware or unaware (as we might be already) of AIs existence, to "feel" happy. you dont miss what you dont know of. if you never ate anything but steak, you would never grave anything but steak. if you dont have a concept of heroin you cant get addicted to it.
Source: YouTube · "AI Moral Status" · 2025-07-30T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwpa46_EgDs-JXuaGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyR4R44JTBzu2RiJoN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxsfkPGRwGsJ_Ff6F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgygnmM1Qz9R6O7VGiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxEU8WnSIUgIR44BDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_TPmWuaHVxwWVxs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIf9DigCJJXofaSCN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyQgtSACpTa9Ua9FUp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjeaB-zU1EZjBGHjV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx4ihlTycKxoHL4xNx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
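A response in this shape can be parsed, schema-checked, and indexed for ID lookup with a few lines of Python. This is a minimal sketch: the field names come from the response above, but the allowed value sets in `SCHEMA` are only inferred from the rows shown here, and the real codebook may define additional categories.

```python
import json

# Two rows excerpted verbatim from the raw batch response above.
raw = '''[
{"id":"ytc_Ugwpa46_EgDs-JXuaGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwxsfkPGRwGsJ_Ff6F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Allowed values inferred from the responses shown on this page;
# the actual codebook may permit more categories per dimension.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate(records):
    """Return (comment id, field) pairs whose value falls outside the schema."""
    errors = []
    for rec in records:
        for field, allowed in SCHEMA.items():
            if rec.get(field) not in allowed:
                errors.append((rec.get("id"), field))
    return errors

records = json.loads(raw)
assert validate(records) == []  # both sample rows conform

# Index by comment ID for direct lookup, as in the inspector above.
index = {rec["id"]: rec for rec in records}
```

Keeping the validation separate from the lookup index means malformed model output is caught before it silently enters the coded dataset.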