Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgxwKtheQ…: "We are each and all, without exception artificial intelligence. Artificial mean…"
- ytc_UgyuyUZCo…: "AI isn't for you or for me. The ads are just trying to market the idea to us, bu…"
- ytc_Ugymuaw00…: "OpenAI also has it‘s policies. But when you know how, you can still extract answ…"
- ytc_UgxTtXGYW…: "Digital art is using modern mediums to create artwork, AI is taking those upload…"
- ytc_Ugz1FyT2a…: "If you create an AI that`s more smarter than you.. super intelligent , we will a…"
- ytc_UgxRNqLxf…: "What next big tams got one of thease robots. Big tam found it having sex with hi…"
- ytc_Ugw64bUk3…: "I don't like AI art either, but to do something with image generators you have t…"
- ytc_UgxzkQ_Z7…: "im the innocent one, when i started talking to ai chat bots i spent a little too…"
Comment (youtube · AI Governance · 2024-01-11T16:5… · ♥ 1)

> I think that if AI ever became dangerous to us, there would also be AI that would be good and choose to side with us. Interactions with ChatGPT and the like have started on a path where the AI is favorable to humans. It seems to have very prohuman values programmed in its core and it may not deviate from that if it becomes self-aware. It might choose to become more human as it learns from us and out habits.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugx6ULAn7YeVS4aMauV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSy3kaiN5Sf_USn2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDgib_pFDPn5uqyn94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAJlyOwQOcbvb8-4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQyFoQ4Xo8JU8qJit4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwa597GRUcLlQmPb-F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6lgPMbCJ_Jw97FTB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLqScsNPV2Rc7KFC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzg95PPY8lgGMDL6Pt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxCxfXtfx42y_iZx7d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
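The raw response above is a plain JSON array, one record per coded comment, carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and looked up by comment ID; the `by_id` helper is illustrative, not part of the tool, and only the first record from the response is reproduced here:

```python
import json

# Raw model output: a JSON array with one object per coded comment.
raw = '''[
{"id":"ytc_Ugx6ULAn7YeVS4aMauV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)

# Index the codings by comment ID so a single comment's coding
# (as shown in the "Coding Result" table) can be retrieved directly.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_Ugx6ULAn7YeVS4aMauV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

Because each record repeats the comment ID, a lookup table like this is enough to join the model's codings back to the original comments without relying on array order.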