# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Comment
When you prompt chatgpt you can’t control the way a result that is laid down by chat gpt, we are talking about a simple chatbot that responds to a simple line of English, just think about a complex AI who does more than text generation and can interact with physical world, you give it a command or a prompt (Assuming there is no technology being developed for controlling the response according to present) the response can involve harming humans or something on which humans depend! This is why before moving forward , in some or any way the technology for controlling the response of AI should be made.
youtube · AI Governance · 2023-05-29T22:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
{"id":"ytc_UgwRlshBiVyBiajSq_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz3NKZSDERplzEJnrZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwOqiqWGt4uy5YmPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzyhS9AMhGIFZKFl714AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwR-LylRvtftBI1mul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVTNYUGAdGuRDBrjl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUJgckQKiZTOfbwRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyzG5-FxsjXwzbUJj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxFwW-IsxI2QnnvFB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyczu-wKOr16Rfainp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
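A raw response like the one above is a JSON array of per-comment codings, which can be parsed and indexed by comment ID for lookup. A minimal sketch, assuming the field names shown in the response (the example IDs and the `index_codings` helper below are hypothetical, not part of the tool):

```python
import json

# Example raw model output, mirroring the structure shown above
# (comment IDs shortened and invented for illustration).
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse the model output and index codings by comment ID,
    verifying that every expected dimension is present."""
    codings = {}
    for row in json.loads(raw_response):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        codings[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return codings

by_id = index_codings(raw)
print(by_id["ytc_example1"]["emotion"])  # -> fear
```

Validating the dimensions at parse time catches truncated or malformed model output before any coding is written back to the dataset.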