Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Professor Hinton's ideas are correct, including the concept that the latest LLM systems today are doing human-like thinking. What you heard was about earlier LLMs (those of 3-5 years ago). The frontier (top/latest) LLM systems are designed to be far superior, and show many aspects of thought and understanding. If you want to test this, you must use one of the latest (or recent) LLMs. If you need help with this, ask. I can also suggest prompts (topics to ask an LLM) that may help you see that they really do think (as well as an educated person does).
A side note. As you know, people with a PhD degree are called "Doctor". Some PhD holders also achieve a higher title/rank, called "Professor" (when they become a college/university professor). Professor Hinton is both a PhD and a professor (he was a professor of the highest rank, but is now retired from teaching). It's considered respectful to refer to a person using the highest title/rank they have achieved.
Source: youtube
Category: AI Governance
Timestamp: 2025-12-02T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgwFBRhG4wGhLWq3eQh4AaABAg.APd9UNjlbneAPfjYZdu39g","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwFBRhG4wGhLWq3eQh4AaABAg.APd9UNjlbneAPgW-4lmF6K","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwFBRhG4wGhLWq3eQh4AaABAg.APd9UNjlbneAQEGjMjSifT","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwoXAkcvF-h0utWPwh4AaABAg.APbnj-8Zr1oAQE9ZM-U__4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzNU0vnWUyB631VBFR4AaABAg.APYrYS9ofaKAPbLaMrnrN5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzNU0vnWUyB631VBFR4AaABAg.APYrYS9ofaKAPg0Ww5RWbK","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxZ-6VzPHArZxxLz7x4AaABAg.APWqebiTTIcATbdRdGPUjn","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugxh7JrvtYFVlQkE6zV4AaABAg.APW9dzxkQPzAPYxg2E-OIX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwQ5nYO_lm1W8lHNhF4AaABAg.APW3PiDhzKNAPW676LyNYb","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwQ5nYO_lm1W8lHNhF4AaABAg.APW3PiDhzKNAPW9N3btHfc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
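The raw response above is a JSON array of per-comment records, one object per comment ID, with the same four dimensions shown in the coding-result table. A minimal sketch of how such a batch response could be turned into a lookup table keyed by comment ID (the IDs and values here are shortened placeholders, not real records from the dataset):

```python
import json

# Hypothetical raw batch response in the same shape as the array above:
# a JSON list of objects, each with an "id" plus the four coded dimensions.
raw = '''[
  {"id": "ytr_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_example2", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the records by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codes[comment_id]

print(lookup("ytr_example2")["emotion"])  # fear
```

A dict keyed by ID mirrors the "Look up by comment ID" workflow: once the batch response is indexed, each inspected comment's coding can be fetched directly without rescanning the array.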