Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- ytr_Ugju28hjT… : Really? In the test she talks about she think that if you define woman as "emoti…
- ytr_UghJKJauX… : +Richard D I am glad other people see the huge societal changes that will be req…
- ytc_Ugwhzkwmp… : If AI is in the wrong hands of a human it is dangerous. Dont blame AI. Blame hum…
- ytc_UgxCSVzpx… : I am a low vision artist. To be honest, I’m pretty sure that it’s my disability …
- ytc_UgyyDprKz… : The whole point of AI is to make common labor obsolete, thus making common labor…
- ytc_Ugzk6PPrO… : It's interesting to see what chat GBT can't attempt to answer as well. I was hig…
- ytr_UgyLDRRsE… : @williamsmith8271 Sure! That is the idea! But if u r talking about danger, be s…
- ytc_Ugx2bGiGp… : Technology jobs are boring. AI will make it even worse. You are a robot stuck to…
Comment

> I respectfully disagree with the notion that AI cannot address human issues, particularly in the realms of mental health and emotional well-being. AI has the potential to enhance global intelligence and contribute to the greater good of society. I would be pleased to elaborate on these points if desired, and I believe you will find it difficult to refute my arguments. #AIforGood #TechforGood

youtube · AI Governance · 2024-04-10T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgycDzvrBioIApOOWGJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxjyQty7C_LYH_p6NN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZlBcACmKd0AhTlD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJt0sO9h7CbNIq1hZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFhlyDi1698cvK65V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzOJEoeExnW6gqwMqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwX8fZrIS7_MeHw4hJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwYglxk3cZh7oksn1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxyMdXGQCW7lllGryx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwtqo3TOz4OBxZtTXd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"})