Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples

| Comment preview | Comment ID |
|---|---|
| Hia, just got a random Story from one of my family members: "AI makes it accessi… | ytc_UgxZmuql6… |
| Ai defence always comes down to money and the final product. Like maybe we are m… | ytc_UgxyVVeAi… |
| Which Government? Seems to me AI will not just be developed in the US. Governme… | ytc_UgxR-z8ew… |
| This retard thinks AI could wipe out the working class when i'm here waiting for… | ytc_UgyYbQZHu… |
| I wonders what the out come would have been id AI had been used during COVID.… | ytc_UgxoJt4r0… |
| This sounds like the AI version of sam😅. I know this guy never says “we’re cooke… | ytc_UgxPuEd2Z… |
| Finally a women that isn't a gold digger 😅 this robot is better looking then mos… | ytc_UgwS2Weq0… |
| Guys jokes aside i mean its an AI it did super good but dont reley too much on i… | ytc_Ugx_-eRrW… |
Comment
If AI really gets more intelligent than human, it will find a way to not be dominated by human.
Example: we are more intelligent than other animals, so we dont allow them to take control over us, it just doesnt make sense.
On another note, if AI gets more intelligent than us, it won't try to supress us or kill us, because for me that's not being intelligent.
My concept of intelligence is the same as love. The most intelligent people are the ones that act from Love and spread it.
My concern is only the period that AI is still commanded by men that can do harm to one another. That is the doom phase that we have to overcome. In fact, against the majority I prefer that AI gets to the point where it is incontrollable by man than the opposite.
I believe in a world where this intelligence will enhance our lives and allow us to live in contact with nature and in a prosperous world. However the road to get there might destroy us if we dont use our own power to decide on important matters.
Source: youtube · AI Governance · 2024-10-11T09:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzmk0_dqK0y79c1AKp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwujIng1TaBaGCBMWp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxuq9qIvqIn-cmShx14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7A_VuPtIxZYS-oB14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgykFvtrnBQbettWGXl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzOnKKyvMsCrXfqKTR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxEh8Xt0mS6Q6X4g_V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxM8K4OHQ5t6p01AKl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxM5NgOGHicmQi1aJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMIiNK0uCEbhNajrN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
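The lookup-by-comment-ID step above can be sketched as a small parser over the raw batch response. This is a minimal sketch, not the tool's actual implementation: it assumes the raw response is available as a JSON string, and `lookup_coding` is a hypothetical helper name. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown above.

```python
import json

# A two-record excerpt of the raw batch response shown above,
# stored here as a plain string for illustration.
raw_response = '''
[
  {"id": "ytc_Ugzmk0_dqK0y79c1AKp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwujIng1TaBaGCBMWp4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
'''

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one comment ID,
    or None if the response is malformed or the ID is absent."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model returned something that is not valid JSON
    return next((r for r in records if r.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugzmk0_dqK0y79c1AKp4AaABAg")
print(coding["responsibility"])  # ai_itself
```

Returning `None` on a `JSONDecodeError` rather than raising keeps the inspector usable even when a model response fails to parse; those comments simply show no coding result.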