Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Google letting this guy speak up about this topic and YouTube algorithm didn't …" (ytc_Ugwc2tZMW…)
- "I quite like the AI key. I rather have it be set to pull up another AI though.…" (ytc_UgxdijB_D…)
- "I think that human made art will become more valuable as AI art becomes more pre…" (ytc_UgzaJl6DQ…)
- "I've asked Chatgpt around 10 technical questions and it got none right. NONE. No…" (ytc_UgziW71nx…)
- "This is why we can’t use ai for this stuff. Personally I feel like all we should…" (ytc_Ugw9-K9SN…)
- "Not arguing that AI art should be copyrightable, but being a "good" or "real" ar…" (ytc_UgxD08xNO…)
- "Software is never free from bias, and anyone trying to tell you otherwise is gri…" (ytc_UgzTiQznd…)
- "Well let's keep AI out of policing and make them actually do some work. Or we mi…" (ytc_UgylsIFEY…)
Comment
Yes. I am a computer engineer. The main reason why ai can be seen as dangerous is simply because of its goals and its understanding. Programmers program robots often with only 1 function, goal, or mission. With this being said, the robot will stop at no end to reach the goal as that is its only reasoning.
youtube · AI Harm Incident · 2024-01-05T16:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytr_UgyVOFEGbFFeK2rEvzJ4AaABAg.9zMgNmt6hpl9zQaAGQ5-SC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugx4pSghbHWMv7oIsDd4AaABAg.9zMbKXNNLHj9zO4kHbMP8o","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugx3paPklZJomECCL9B4AaABAg.9zGdZ2t4lHq9zKJ-Ox7n0U","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx3paPklZJomECCL9B4AaABAg.9zGdZ2t4lHq9zL7DBDMXY1","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx5DDiTLmI2a_1stCl4AaABAg.9zE_8A_yJtg9zIWgHlwtxC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx5DDiTLmI2a_1stCl4AaABAg.9zE_8A_yJtg9zIeCOM_W3m","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwmgnvKPmX_F2t9lKt4AaABAg.9zETItT5p-Y9zEYYT87I-c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwn5LwEsORtsedjd9J4AaABAg.9zCaTL4rdMt9zF0opthf6R","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgyZGVM-TALUvyq9V6B4AaABAg.9zATmgA1TTX9zBuR7ZaacH","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgxlfPcW66TsvN9NTqp4AaABAg.9z7D865hyrl9z8tyURFi7Y","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
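The raw response above is a JSON array in which each element carries a comment ID plus the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated is below; the `parse_coded_batch` helper and the allowed-value sets are assumptions inferred only from the values visible in this page, not the tool's actual codebook.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Hypothetical: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coded dimensions, rejecting any row
    whose value falls outside the allowed set for a dimension."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a single made-up row in the same shape as the batch above:
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
batch = parse_coded_batch(raw)
print(batch["ytr_example"]["emotion"])  # fear
```

Validating against a fixed value set catches the most common failure mode of structured LLM output: a syntactically valid row that drifts outside the codebook.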