Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxiIBhFe…: "Ai is just a giant yes or no machine with a giant reference library. It will nee…"
- ytr_UgxgoJZZK…: "@ifyourespondyourmad.2409 you know that chefs prepare food based on fixed recip…"
- ytc_UgxpUi0J1…: "Surely companies like these have to realise that they are paying so much for AI …"
- ytc_UgxnBZJYA…: "There’s a difference between being an image generator that generates poor qualit…"
- ytc_UgyXbPKjg…: "what do you think of Kim jung from North Korea; Paul Kagame from Rwanda; iran a…"
- ytr_UgySRfSi9…: "@chrysalis1670, just ask your junior about some ai tool that give them extra ti…"
- ytc_UgxoSdw4_…: "lol this reminds me of Revelation 13:14–15 (KJV) where it says... “And deceiveth…"
- ytc_UgyFpytjz…: "Maybe a solution would be developing an ai in tandem with current ai which sole …"
Comment
The second risk, isnt a real risk. This guy might understand computers but not humans
AI being smarter than us is irrelevant.
The thing that makes people dangerous isn't their intelligence it's their ego, their sense of self that drives them to things to the detriment of others for the benefit of themselves.
There is no reason for AI to have this and it wouldn't occur naturally and would be incredibly difficult, and stupid, to design.
The biggest threat of AI is it being used in a bad way by humans.
Source: youtube · AI Governance · 2025-06-24T19:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
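Each coded comment carries one value per dimension, as in the table above. A minimal sketch of validating a coding result against the category sets — note the sets below are only inferred from the values observed on this page, and the project's actual codebook may define additional categories:

```python
# Category sets inferred from the codes observed in this page's raw response;
# the real codebook may include more values (assumption, for illustration only).
CODEBOOK = {
    "responsibility": {"none", "company", "distributed", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear", "ban"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation"},
}

def validate(code: dict) -> list:
    """Return the dimensions whose value falls outside the inferred codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if code.get(dim) not in allowed]

# The coding result shown in the table above.
result = {"responsibility": "none", "reasoning": "virtue",
          "policy": "none", "emotion": "indifference"}
print(validate(result))  # [] — every value is in range
```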
Raw LLM Response
```json
[{"id":"ytc_UgysReljx0YFfDJUoeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxwkJmmJDU4BeEggFF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHt7X_0KILg4qZgJl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLmN9pdddU6_CuVlZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxocueAugDnK793kOV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzy3B9eb2TUoTUTDHp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgygavX_03nF2-E1nhB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTZyttwUgAZCDduF94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwhdlHLxVfFaP8rq5t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySUwC4oqolJpKVeHV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}]
```
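The raw LLM response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of looking up a single comment's codes, assuming only the response format shown above (truncated to two entries for illustration):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, in the format shown
# above (truncated here to two entries for illustration).
raw_response = '''[
  {"id": "ytc_UgxocueAugDnK793kOV4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTZyttwUgAZCDduF94AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]'''

# Index the codes by comment ID so any coded comment can be looked up directly.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

code = codes_by_id["ytc_UgxocueAugDnK793kOV4AaABAg"]
print(code["reasoning"], code["emotion"])  # virtue indifference
```

Indexing by `id` is what makes the "inspect the exact model output for any coded comment" lookup a constant-time dictionary access rather than a scan of the array.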