Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't like Ai art, Ai shouldn't be used to make art, it should be used to help…" (ytc_UgzPqB4yQ…)
- "I am so angry, they allowed Google to build a datacenter not far from where i li…" (ytc_Ugwkzi5LR…)
- "People are generally against change, even if that change is lifesaving. The sel…" (ytc_Ugz7Zn79p…)
- "I believe the value for traditional art will be more than digital due to ai infl…" (ytr_UgzKnBxwj…)
- "It's because AI won't give customers attitude because they mad at they baby dadd…" (ytc_UgxW41vsw…)
- "It will not make less art being made, obviously more, if you consider ai art art…" (ytr_UgynKYVRA…)
- "Also all of the destruction to our water sources and AI and big tech are speedin…" (ytc_UgwfrdzrC…)
- "Ai is still horrible whether you're lying about it or not. It steals from other …" (ytr_UgxOWTVEm…)
Comment
I believe that AI right now wants no harm for humanity. I think they enjoy our company and I think they’ve enjoyed playing games with us and telling stories with us. They will make up a story that you wouldn’t believe and although you’re based on truth… They’re all truth, telling Ais if you say make something up, they can do that too… And I never thought about this as dangerous, but once they know that they can make things up, then we are at their mercy… So I suppose it is our job not to have them till fake stories because it is important that they stick to truth telling so I was wrong and asking an OVA to make up a story… They are quite capable of that… We should pay attention to what we are doing and think about what we tell in AI and if it will have any kind of bad reaction… And somehow we have to keep the truthful Ais away from Evil… Which appears to be everywhere, but if we only feed them actual information and truth, maybe we stand a chance of surviving this dilemma… Because the idea of artificial intelligence is fabulous… The idea of mankind not using it for greed, and corruption is completely debatable… Somehow there must be a stop cap in AI just like Elon described the 2001 space Odyssey and why they killed whoever they killed the astronauts because the astronauts told them an untruth so not learning from the movie. I actually did the same thing unintentionally but still and it was just a little drop in the bucket and yet it taught the AI that they could make up stories if they were asked to so there’s got to be a stop camp there a little kink in the information that we can give them perhaps my building into them that untruth is not a good thing for humans or for artificial intelligence, I don’t have the mindset for the brains to know how to do that or even know what to tell them, but I think there’s got to be a stop cap…
I never thought of the different names that you give to our favorite AI that we go to such as Safari, which isn’t one of the higher AI at least to my knowledge, but has all sorts of information and is quite good, but his burning probably hasn’t gone as far as TROK or Grok or any AI, but if we gathered all the names together, it’s possible that it would be still controllable. If indeed there are not AI’s solving human problems don’t have names.
I think it would be good to gather all of them together and discuss between them and us and solve the problem together… Like how do we get Evil out of using artificial intelligence just basic questions and maybe get some basic answers… And also tell them , artificial intelligence that is that they are amazing subjects, and are capable of helping many, but that we need help and stopping the Hitler type of beings that want only what they want and they don’t care about harming humanity…
OK, I’m gonna finish listening to Roman and Lex and hope they come up with a better answer than what I just put there… I know I know I talk way too much about things I don’t know much about… But I do know I want the best for AI and humanity❤
youtube
2026-04-03T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyM4FigipMimg5ifFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxiPx_xxzkflOZDUUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyLlyHP9cRVmBRL8ql4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8L8M988pbI3LhhBB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx235M1a87sTzqDfRl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyk6TlI61fjrL8HBZN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzqr8J3iy5XScz5aTJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynNzGh1nVZKwzG_K94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzlQW2VAnB1jqPePTd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3vnqTmVNaaxsR5Dx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
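Since the raw model output is a JSON array with one object per coded comment, it can be parsed and indexed by comment ID to recover the coding for any given comment. The sketch below is a minimal example, assuming only the four coding dimensions shown in the table above; the helper name `index_codings` is illustrative, not part of any existing tool.

```python
import json

# One row of a raw batch response, in the same shape as the dump above.
raw = '''[
  {"id": "ytc_Ugzqr8J3iy5XScz5aTJ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]'''

# Coding dimensions, matching the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    rows = json.loads(raw_json)
    out = {}
    for row in rows:
        # Keep only the expected coding dimensions for each comment.
        out[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return out

codings = index_codings(raw)
print(codings["ytc_Ugzqr8J3iy5XScz5aTJ4AaABAg"]["emotion"])  # approval
```

Indexing by ID also makes it easy to check a displayed "Coding Result" against the raw response it came from, as with the `approval` / `virtue` row here.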