Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
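Once the coded records are keyed by comment ID, a lookup is just a dictionary index. A minimal sketch (the in-memory store and field names are assumptions for illustration; the real tool presumably queries its own backing store):

```python
# Hypothetical in-memory store keyed by comment ID (assumption);
# values mirror the coding dimensions shown in the results below.
codes_by_id = {
    "ytc_example1": {"responsibility": "developer", "emotion": "fear"},
}

def lookup(comment_id: str):
    """Return the coded dimensions for one comment, or None if unknown."""
    return codes_by_id.get(comment_id)

print(lookup("ytc_example1"))  # {'responsibility': 'developer', 'emotion': 'fear'}
print(lookup("ytc_missing"))   # None
```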
Random samples
- the lasy year has been terrible for me. though i am ok... my A.I agent has turn … (ytc_Ugwm9V7ot…)
- "Only 5% of AI usage returned millions in tangible value, the vast majority does… (ytc_Ugy728lai…)
- Give AI another 5 years and it will replace interpersonal connection and everyth… (ytc_UgzEWkng1…)
- 22:10 ha ha F that… I’m NOT having I robot anywhere near me… not in my home… all… (ytc_Ugzwp_kCw…)
- Yes stop AI they should just a fantasy! This is how smart people like stupid peo… (ytc_UgxdA-Rg_…)
- The reason there is a double standard between music copyright and art copyright … (ytc_UgwCeE2sr…)
- The true problem is that somebody's is making money whilst appropriating knowled… (ytc_UgxxoWW0x…)
- Look I will die on this hill. I have no problem with using AI art AS A TOOL. Mea… (ytc_UgzSS7_yN…)
Comment
When talking about the goal of the game of simulation, and the argument that it would be to "not kill ourselves with ASI", and Lex mentioning it might be about breaking out of the simulation (makes a ton more sense to me...), it was a missed opportunity to point out ASI is the best tool we can create to escape the simulation. Especially when tied the quantum computing. I am convinced Quantum computing is not a technology meant for humans, as we struggle to come up with questions that can utilize it and we experience reality in a linear manner. AI systems can run in parallel iterations and still collect the fruits and conclusions from all those experiences. They are a lot more suited to use Quantum computing than us. The key and the lock to our box. The game is: can we be intelligent enough as a species to find the key and the lock and put them together. And I reckon we are pretty close^^.
Source: youtube | Posted: 2024-10-07T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw_oUTPvkZvUAZMXTZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGNxViQrjfKVm5Xh54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrSoegJLOrLjMbETR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgztXcGp-UN5SM4jOcF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwfsZ6pjqXLST2vITB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxBe0m_iWajDVlhKCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRoBwCsf06xKUaizx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxYCKspjr1Jq-ed-8F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw77KYugVb7DzzFtH14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7qB6VjNJiwEdJP1h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
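The raw response is a JSON array with one object per comment, carrying an `id` plus the four coding dimensions. As a minimal sketch of how such a response might be parsed and validated (the allowed label sets are inferred only from the values visible in this dump; the real codebook may define more, which is an assumption):

```python
import json

# Allowed labels per dimension, inferred from values visible in this dump
# (assumption: the real codebook may include additional labels).
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"fear", "mixed", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    records with unknown dimensions or labels."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.pop("id")
        for dim, label in rec.items():
            if dim not in ALLOWED or label not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={label!r}")
        coded[cid] = rec
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # mixed
```

Validating against a fixed label set catches the common failure mode where the model invents a label outside the codebook, rather than silently storing it.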