Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
- "Laws need to come in to place here where any deepfakes of a person is illegal. I…" (ytc_Ugw2ljCAt…)
- "I'm 50/50. It's the emergent abilities in the LLMs I think give the most pause. …" (rdc_jj80mdj)
- "Nice talk but he admitted that he has a literal vested interest in Google and Ge…" (ytc_UgzLTd_wt…)
- "Well AI are buult by people soooo 🤷♂️ qnd it has access to whats going on in th…" (ytc_UgxEi8iVv…)
- "The hodlers of Bitcoin kind of gave themselves a UBI. It's wild that you ethica…" (ytc_Ugztn1z-V…)
- "I've literally had nightmares involving being stuck inside a self-driving car. T…" (ytc_UgxMLUBFQ…)
- "Now that u lot interviewed the godfather of AI Geoffrey Hinton, next up should b…" (ytc_UgygsbETt…)
- "That's wrong though. AI companies weren't idiots in the EU at least. They made t…" (ytr_UgxmdQDPi…)
Comment
I think Alex might want to study philosophy of mind and analytical philosophy to a greater extent to make future discussions with AI more appealing. This felt more superficial than it could have been. For example, reading Searle more closely would make much of these questions for Chat GPT defunct. For example, understanding that Chat GPT utilises sintax, not semantics. My phone doesnt need to know what an app is in order to initiate a program. Humans do this as well, but also utilise a Higher Order Thinking faculty, particularly when it comes to reflecting on meaning and value. We also have sesory experience which informs our reality, another factor which severely limits the insight this video seems to be claiming.
I finished the video - when Alex askes the AI to only respond in Yes and No, he reduces the dialogue to a binary. Its the tail wagging the dog when he tries to have a conversation about consciousness using fallacies suxh as false dichotomy and reductio absurdum.
Enjoyable nevertheless
Source: youtube · Video: AI Moral Status · Posted: 2024-07-30T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxchB7J0p0B011_ucZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwIpmZMRNXNfZPaqGp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"amusement"},
{"id":"ytc_Ugwy3COYfHSl9uA-u-F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwsv4bsl3FBkT4Tp7J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx13hLZ9p51Ec72vsB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxdELNJdLE5jvrv-IB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzIrhq-Gf0KF0h98C54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"},
{"id":"ytc_UgyxPpWHwZ4V-u7Ljnd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxe2aYoDMTG9r665k14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwVjewGyPHD4vLYniN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
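The raw response above is a JSON array with one coding object per comment, keyed by comment ID and carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how the "look up by comment ID" view could be backed is to parse the batch and index codings by ID. The function name, variable names, and the inlined two-record sample below are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the raw response above, inlined as a small sample.
RAW_RESPONSE = """[
 {"id":"ytc_UgxchB7J0p0B011_ucZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxdELNJdLE5jvrv-IB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]"""

# The four coding dimensions shown in the Coding Result table.
REQUIRED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and index each coding by comment ID,
    checking that every record carries all four dimensions."""
    index = {}
    for rec in json.loads(raw):
        missing = REQUIRED_DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')} is missing {missing}")
        index[rec["id"]] = {k: rec[k] for k in REQUIRED_DIMENSIONS}
    return index

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgxchB7J0p0B011_ucZ4AaABAg"]["emotion"])  # indifference
```

With the full batch loaded the same way, the lookup-by-ID box reduces to a single dictionary access, and malformed model output fails loudly instead of silently dropping a comment.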