Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- Imagine your ai gf/bf suddenly going “ok so, i cracked my own codes just to say … (ytc_UgxW5TYTP…)
- @karatemonkey124 A movie about AI, done with AI. But based on real models. It is… (ytr_UgxIUElA7…)
- You still have to come up with the plot, characters , a story line for the AI to… (ytc_UgxWVSN0C…)
- Listen man, hire more human workers. There are lots of poor people needing jobs … (ytc_UgwpeZOG_…)
- Maybe companies should not expect a 4year old Albert Einstein to revolutionize p… (rdc_ocr6zxn)
- Any robotic utopia is unachievable in capitalism. Robots don't produce surplus v… (ytc_Ugw3HNHtd…)
- When automation technology took off after the ‘08 recession, manual jobs disappe… (ytc_Ugxkj8OQB…)
- 1 ) look at eyes if they are very clear or not 2 ) look at the background it's b… (ytc_UgwMpEf1e…)
Comment
@jorad4887 Suppose the A.I. superintelligence is activated. Suppose that the SuperIntelligence we will call S.I. now develops a plan to kill all of humanity without them noticing. After 5-10 years of activation post-S.I. everything seems normal and mass incarceration is a daily norm for racialized groups. After these years, we will see the S.I. slowly start to flag people like yourself and attempt to incarcerate as many humans as possible to execute their plan. Only the most stupid humans on earth would be permitted to walk free. A.I. wins. Now imagine a world where we trained the S.I. to be compassionate, reasonable, and true to the law that humanity has created? Would we have the same problem? I'm sure the S.I. would on the converse report human bias to us, if we trained it to be this way. The decisions we make today will impact tomorrow. Putting good people in a jail cell or mental hospital for life is not the answer.
youtube · AI Bias · 2023-10-23T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
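Each coded record must use values from a fixed codebook. A minimal validation sketch in Python — note that the allowed value sets below are inferred only from the examples visible on this page, and the real codebook may define more values:

```python
# Allowed values per dimension, inferred from the samples on this page
# (hypothetical subset -- the full codebook may differ).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above.
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "ban", "emotion": "fear"}
print(validate(coded))  # []
```

A check like this catches malformed or hallucinated codes before they enter the dataset.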
Raw LLM Response
```json
[
{"id":"ytr_UgyE2kNK_2Uwepn9BFt4AaABAg.AUwmwg77TZDAVe2kEfSf39","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_Ugzk_rj-S_fLiXBHgEx4AaABAg.AJvJgfyFS_1AJxRlzv3qh3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzk_rj-S_fLiXBHgEx4AaABAg.AJvJgfyFS_1AJzFkHKPDNk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgynrwkZia26FDArA054AaABAg.ALaCVsqISH0ALfhd9qsz7e","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgznbhOw5Om8rh1fuix4AaABAg.A0899s81LojA0AA-Bf04Y2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxSQxHfCTdORIZQSjd4AaABAg.9qOs4lOcb1f9wAdexC2RKX","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxSQxHfCTdORIZQSjd4AaABAg.9qOs4lOcb1f9wC57gVjDqK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgwDyHPuqp3IBZB9GHl4AaABAg.9Hh_wFgOzxU9ULvX3w9Ymu","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_UgymNQ8urPR_ExAHekx4AaABAg.9GQZnYdtT9V9qFuT28wnCA","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgzM6vKXqm9O4gN7Inx4AaABAg.8vHkwJFv_iQ9Fvf_fVHQ5U","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
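The raw response is a JSON array of per-comment records, so looking up a comment ID reduces to indexing the parsed array. A minimal sketch, using a two-record stand-in (the IDs `ytr_A` and `ytr_B` are hypothetical placeholders, not real comment IDs):

```python
import json

# Stand-in for the raw LLM response: same shape as the array above,
# but with shortened, hypothetical IDs.
raw = '''[
  {"id": "ytr_A", "responsibility": "government", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_B", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

# Index the array by comment ID so any coded comment can be fetched directly.
by_id = {record["id"]: record for record in json.loads(raw)}

print(by_id["ytr_A"]["emotion"])  # outrage
```

Building the dictionary once makes every subsequent "look up by comment ID" an O(1) operation, which matters when inspecting thousands of coded comments.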