# Raw LLM Responses

Inspect the exact model output for any coded comment, or look a comment up by its ID.

## Random samples
- `ytr_UgyoLJHJQ…`: "The ai generated it, you simply asked it to. I doubt the Ai cares about the art…"
- `ytc_UgxMJIFBZ…`: "I dont see anything wrong with AI Art and what hat guy was doing, am i a conserv…"
- `ytc_Ugx06Ko-0…`: "Yet he's making AI. I hope he's making it for when the elites have finished maki…"
- `ytc_Ugx-Tx1t8…`: "I once made Meta AI admit that there is no way of proving that Mark Zuckerberg i…"
- `ytc_UgyVadmJq…`: "Doordash and Uber and Lyft asked AI to write a program to not allow the drivers …"
- `ytc_UgxJWIDt6…`: "yes semi-automatous vehicles are deadly when you ignore the semi part didn't pe…"
- `ytc_UgycXD3iG…`: "My guess is Corporations/Govt's wouldn't want an AI because eventually, they'd l…"
- `ytc_Ugz1lGfiV…`: "I hate this, I hate where this AI bullshit is going, I hate how many people thro…"
## Comment

> listen A.I is already here in all applications of technology but the A.I is many computers like many humans until they are all connected they are still in the digital stone age but if say 3 countries A.I dont like eachother then instead of us using A.I to fight A.I or other humans what if the A.I uses us humans to do their fighting .. also it only takes one large secret agency to use data and their countries computers to process that data into one self contained super computer to download it into making a self aware A.I only they need too understand why things die because without this they cant understand humans they need to become human impossible no we are theoretically A.I but evolved through thousands of years of learning thus grew and still growing a conscience A.I cant and you cant programme a conscience only input data of right and wrong its upto the A.I to discover this only then can it become a self aware entity i.e humanoid you must teach it like a baby starting at the beginning as a baby because to much information at once could cause major repercussions
- Platform: youtube
- Video: AI Moral Status
- Timestamp: 2023-01-18T18:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response

```json
[
{"id":"ytc_UgyeKAY8m3s3BrvHE1l4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxwMNdH6Wi9_xRooi94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxDm4kPOYQ4yv39m6B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyvXqAjttdCrWhD21F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzFie_NPE1u6Oo28Ql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJVqyH8wCzJ6fv5Id4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVePaOYY6t92BGISd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNT3y4g1YoDd7TLHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw3o44X-jGKyamWVot4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymSQDcVFbf4e_IVd94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
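The model returns one JSON record per comment, with a fixed set of dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be parsed and validated before use: the allowed value sets below are inferred only from the values visible in this sample, and the real codebook may include more categories.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this sample response only
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw coder response, keeping only rows whose values
    all fall inside the (assumed) codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage with a hypothetical single-row response:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
rows = parse_codings(raw)
print(Counter(r["emotion"] for r in rows))  # Counter({'fear': 1})
```

Dropping out-of-codebook rows (rather than raising) is a design choice: malformed model output then degrades the sample size instead of aborting the batch, and the rejected IDs can be logged and re-queried.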