Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by comment ID, or inspect one of the random samples below.
- ytc_UgwMvGA51… : AI can't do physical work like stocking shelves, driving a forklift, flipping bu…
- ytr_Ugz679GRD… : I'm one of those people, i used to be able to draw (mediocre) but for the past f…
- ytr_Ugyvvy9ov… : @Jvm-iq2qe they definitely have, ChatGPT is a product of early 2020s and it sho…
- ytc_UgyM99RUQ… : I think first people should realize the "AI" we currently have isn't actually ar…
- ytr_Ugz0P3Zxh… : @nonamepersonanonymous5246 since it’s an evaporative process dissolved solids wou…
- ytc_UgynMxW4t… : Nobody's born with talent! Some people just learn faster than others and i deepl…
- ytc_UgwqVs4Iw… : One of the jobs will be an AI safety switch operator. This person will sit by a …
- ytc_UgwvPiqfC… : One thing AI can't replace is Segregation of Duties and basic Audit. At the end …
Comment
@andrasbiro3007 I want to believe GPT is more than what it seems, but it does not seem remotely close to a general human intelligence. It's fantastic at regurgitating information and it knows how to be creative with it based on lots of examples. It can write pretty well, but it doesn't really seem to know what it is actually writing about. It's not hard to expose holes in its understanding, assuming it has any, if you start to question it a bit.
I don't think FSD is that specialized. Navigating the world, interacting with objects and planning its actions is a big part of AGI. They aren't putting it in a humanoid robot because it's only great at driving. I think if it was able to communicate what it was "thinking" in words it would look much more impressive to people. Also, I would love to ask it why it keeps turning on my left turn signal for no reason.
I have read Superintelligence and tons of other writing on the subject. I am more than familiar with the various arguments about what could happen.
youtube · AI Governance · 2023-03-30T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgyTB3WW0fTYKA-HiDV4AaABAg.9nsEGIbNfaj9nsI8s0_nhM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzGmamRiBEZOxEgKjF4AaABAg.9nsEEHuzYpS9ntGN0D-09p","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxY6NhLLM599fqE1614AaABAg.9nsDuXo11eI9nsFjskyt5G","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9nsYWPOAFSx","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ns_9q5dIvy","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9ntz066YoqN","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_Ugwprg8qtmJ--8LVHEx4AaABAg.9nsDfCrPSbO9o37Xo0tU6F","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nsg9UzLHH6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyDwHkXxuTDMG9Cp4h4AaABAg.9nsCyq-Tgx99nswlA-w8YO","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_UgwJZoDDlg98dprfXbN4AaABAg.9nsCEb0moA19nsKPZl1cM0","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
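Any downstream analysis has to trust batches like the one above, so it is worth validating each record before use. The sketch below checks one raw response against the record schema; the allowed category sets are inferred from the samples shown on this page rather than taken from the project's codebook, so adjust them to the real coding scheme.

```python
import json

# Allowed values per dimension, inferred from the examples on this page;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and reject malformed or off-schema records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for i, rec in enumerate(records):
        # Comment ids in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"record {i}: missing or malformed comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"record {i}: bad value for {dim!r}: {rec.get(dim)!r}")
    return records

# Hypothetical record, for illustration only.
raw = '[{"id":"ytr_abc","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}]'
batch = parse_coded_batch(raw)
print(len(batch))  # prints 1
```

Failing loudly on an off-schema value is deliberate: an LLM coder can drift into inventing categories, and silently keeping those rows would corrupt the counts.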