Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Among other things, it seems chatbots are still WAY too sycophantic: they appar…
ytc_UgyV-igDJ…
I think the reason people like something is because it makes them want to make t…
ytc_Ugx90OgiR…
President Joe once had a dream
The world held his hand, gave their pledge
So he …
ytc_UgxlENeiI…
Why he thinks we need so much sofisticated robot ? We don't need a human look li…
ytc_UghjZLxbv…
The lady who started it is from Austin TX and you should invite her. She went on…
ytc_Ugy3QAI08…
AI will just do computer jobs that's it. Anything physical it won't be able to r…
ytc_UgxmK_eZb…
I recently started writing song lyrics and using Suno for the instrumentals and …
ytc_Ugyh-F7dg…
This man can only make cars that crash. How is AI going to take over?…
ytc_UgzRL8PQy…
Comment
Yes, best human in the world are still better than best AI in the world, I mean today in 2024. But chatgpt5 in 1 year, chatgpt6 in 3 years or maybe chatgpt7 will, with no doubt outperform every human in every single task. Why ? Because LLM are showing that we are wrong with our definition of intelligence and knowledge. I mean, the only one and unique human intelligence. It is just an emerging phenomenon. And there is no reason consciousness or even free will are not the same. Those are just functions that can be implemented in different ways. Evolution just implemented them on biological system in million of year, and we've done it on silicium in like 5 years !?! And our brain is capable of processing few "instructions" per second. Imagine an AI with the same kind of complexity than human brain or even 1,2 or 3 order of magnitude less complex but with perfect knowledge learning and restitution and capable of "thinking" millions of times faster than humans.
youtube
2024-05-08T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzxsa2tNs_ahq0giHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxWI9tvIKc-v1cQBF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzUjr4VxZxZqUMKWB14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyb_FvT0Rd2RdTkQcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyr6ZNBAay9sk65w1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgiBgI9mhuGQRttzx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyas7MK0HpUJPNjohZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxf0fLX4ygAHbYoOjl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxY6H7HZsRGELIzsAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5vx6I2-1-dABgzc14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
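The raw response above is a JSON array in which each object carries a comment `id` plus one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked, assuming the allowed vocabularies are exactly the values visible in the samples above (the real codebook may include more categories):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# This is an assumption for illustration, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "resignation", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batch coding response into {comment_id: {dimension: value}},
    rejecting any value outside the known vocabulary for its dimension."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            allowed = ALLOWED.get(dim)
            if allowed is not None and value not in allowed:
                raise ValueError(f"unexpected {dim}={value!r} for {comment_id}")
        coded[comment_id] = dims
    return coded

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw))
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents an off-codebook label, so malformed batches can be flagged and re-coded instead of silently polluting the results.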