Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
My contention is that the pattern matching "reasoning" AI we see today (ChatGPT, etc.) is a completely different technological approach than what's needed for AGI. To develop AGI, we almost need to start completely from scratch. It is also my belief that we will not see fully developed AGI in our lifetime.
youtube · AI Governance · 2023-11-06T19:0… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzoGjXyTSmb9neYwAZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUTQ5jpl_rVc3tibR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwvbSKJs0_aXDC8APd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzu9b6FFR0k02r5Wfd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwJcLoeGku3QOBFwgZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzk0lgmcAfeoT5ceqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzhQE-AhgqG_I8NKBt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxzKGalQSK67lWN78l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwoQgZtjpa0IxwAC5B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsZUYgPRMZDRBQ17h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
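A raw response like the one above is a JSON array of per-comment coding records, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a batch could be parsed and looked up by comment ID — the `index_codings` helper is hypothetical, not part of the tool — consider:

```python
import json

# Illustrative two-record excerpt of a raw batch response (same shape
# as the full array above).
raw_response = """[
  {"id":"ytc_UgzoGjXyTSmb9neYwAZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUTQ5jpl_rVc3tibR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions reported in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
            for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgzoGjXyTSmb9neYwAZ4AaABAg"]["emotion"])  # indifference
```

With this index, the exact model output for any coded comment can be retrieved by its `ytc_…` ID, which is how the table for a single comment maps back to the batch response.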