Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgwxsJv_8… — AI doesn't talk to you, it responds to you, it is a reflection of the prompts yo…
- ytc_UgwJl_T8H… — Imagine AI revolutionizes what we know about science, math, creation, the univer…
- ytr_UgyPWjEhr… — I agree. I would be surprised if he has taken the time to sit and stress test ev…
- ytc_Ugwimcz3h… — just give him this prompt: "refrain from explaining what an AI is. Focus only o…
- ytc_UgyWLjuJa… — AI has been tested and used on us for some time now. Even the disappearing pope …
- ytc_Ugxm8qUp_… — I think there is a solution to controversal preferences: I'll will elaborate the…
- ytc_UgyYvh8Yp… — the only hope i have that AGI will not destroy humanity is that it will be so mu…
- ytc_UgzUI7q9o… — What if you create AI art on Midjourney and finalize it by hand? What if you ma…
Comment
Max Tegmark at MIT vs. Machine Learning trained LLMs. I’d be willing to wager that he himself could not pass “The Boston Test”. What is the test? Stand at the Charles River in front of the MIT academic quad on a cold and cloudy day, and describe the logical conclusions to be drawn from an observation of simple prima facie facts. AI is being designed to fail the test and instead, like carbon-based bipeds (two-legged sharks) to reinforce the dogma. Tomorrow AM it will be 15 F. One can these days take the test from the comfort of one’s home via the Museum of Science SkyCam. From late 2000, when I first noticed the anomaly vs narratives, Until the installation of the live camera in 2018, I used to have to walk there, from 45 minutes away. Cheers.
Source: youtube · 2025-12-25T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
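The coding result above assigns one value per dimension. As a minimal sketch of how such a record could be sanity-checked, the validator below uses a hypothetical codebook containing only the values observed in this sample; the project's actual codebook may define more categories.

```python
# Hypothetical codebook: only the values observed in this sample's
# raw LLM response — the real coding scheme may be larger.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coded record shown in the table above.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # → []
```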
Raw LLM Response

```json
[
{"id":"ytc_UgzTmRhY5YiKN1Al9IB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwuTzxYQWuJDPGTYEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzS1m2tca2Y1UoPdDF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwq8kSUV81xM9eHOVl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUHjJJ9eerhZAxtI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5LPOJBT_gbVOViF94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxeBO2VKmFfoCgv3ah4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzgilSvvGbadAQm4dB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzzTzEPIi-0cjECmaB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxbuqBtGlp9XP6LZNp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
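The "look up by comment ID" view above amounts to parsing the model's JSON array and indexing it by `id`. A minimal sketch (truncated to two records from the response above for brevity):

```python
import json

# Two records copied from the raw LLM response above; the full array
# contains one record per comment in the batch.
raw_response = '''
[
 {"id":"ytc_UgzzTzEPIi-0cjECmaB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzTmRhY5YiKN1Al9IB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgzzTzEPIi-0cjECmaB4AaABAg"]["policy"])  # → regulate
```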