Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@kitty79er i'm pretty sure it works by changing every pixel of color to a slight…" (ytr_Ugzkp1-gG…)
- "That took a hard turn into white supremacy so fast lol. I just wanted to watch a…" (ytc_Ugz9tZrS6…)
- "So is human generated art, though. If you look at 5000 paintings, and then paint…" (ytr_UgxGKgEem…)
- "Its so infuriating that people keep comparing digital art and photography for ai…" (ytc_UgxqaD3Dg…)
- "Really, you're ranting against windmills? As turbines age, they can be repowered…" (ytr_UgwbfvML2…)
- "Time to unfollow everyone that is in this video. You can hate Ai but don't bully…" (ytc_Ugx_kYzTJ…)
- "Don’t you see, that wasn’t a lie! AI is just an acronym for “Actually Indians”…" (ytr_Ugzr88z3Y…)
- "I’m pushing back on the idea that angel engine is good in any respect. I think I…" (ytc_Ugyyw7xWy…)
Comment
Well, no, it can't. We do not have AI. We have algorithm-based models we like to call AI, but this "artificial intelligence" is just like an "artificial plant", superficially it might look like a plant but actually it has nothing to do with a plant, just as AI has nothing to do with intelligence.
The state AI is in, is the maximum we are able to reach, nobody on this planet has the slightest clue how we should get from where we are to a reasoning AI, much less to an AGI. Even though the video calls ChatGPT a reasoning AI, that simply isn't correct. ChatGPT isn't able to reason, it just emulates the behavior that makes the user believe it does. We don't know how to create a reasoning AI and we most probably never will.
The only thing we do know is that the way we are creating AIs today is not capable of creating a reasoning AI, so we are basically at square one in that regard.
Source: youtube · AI Governance · 2023-11-07T22:0… · ♥ 86
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgxzKGalQSK67lWN78l4AaABAg.9wlWhylI3f49wmg1MP2sjI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzIdMdkN-t-S5wmQeN4AaABAg.9wjpXcETXAA9wpbV6AU66y","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzIdMdkN-t-S5wmQeN4AaABAg.9wjpXcETXAA9wpxkhW-Ow_","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzIdMdkN-t-S5wmQeN4AaABAg.9wjpXcETXAA9wqFfI-E84i","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw9zweTXjrvNJnPm9Z4AaABAg.9wilZcfVL-z9wjI4zGWqS_","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxoNNo88O_UKODXVtt4AaABAg.9wibGptt6Z59wlI8xERlAe","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxoNNo88O_UKODXVtt4AaABAg.9wibGptt6Z59wmwefxtyAh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxoNNo88O_UKODXVtt4AaABAg.9wibGptt6Z59xwvm75QeU7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxjKY41wlmdohLo9rV4AaABAg.9what6HG_6SA0E-Hdx1pLJ","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugz9QA_Z_tDTFWspqeF4AaABAg.9weFWW-n-_rAI6lzTcIL3s","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
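The raw response is a JSON array with one object per coded comment, carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and indexed by comment ID — note that `validate_batch` is a hypothetical helper, and the allowed value sets below are only those observed in this batch, not the tool's full codebook:

```python
import json

# Dimension values observed in this batch; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "mixed", "approval", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid rows by comment ID.

    Rows with values outside the expected sets are skipped, since a model
    can occasionally emit labels that are not in the codebook.
    """
    coded = {}
    for row in json.loads(raw):
        bad = [k for k, allowed in ALLOWED.items() if row.get(k) not in allowed]
        if bad:
            print(f"skipping {row.get('id')}: unexpected value(s) for {bad}")
            continue
        coded[row["id"]] = {k: row[k] for k in ALLOWED}
    return coded
```

With the batch above loaded as `raw`, `validate_batch(raw)["ytr_UgxjKY41wlmdohLo9rV4AaABAg.9what6HG_6SA0E-Hdx1pLJ"]` would return the row coded as `user` / `virtue` / `none` / `resignation`.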