Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good video as always, but you missed the part where you balanced the story, so here are my thoughts: We do not even have self-driving cars yet, despite many years of saying it is here (we will get there eventually). We are extremely far from the much more difficult general AGI. Sure, if you train an AI with data that says humans are a plague on the Earth that should be reduced in numbers and eat bugs, then the AI will be able to regurgitate such elitist claims, but it still has no idea what the tokens it is saying mean. It is not self-aware. We don’t even know if it is even possible to create self-aware machines outside of Science Fiction or dubious secret military whistle-blowers. Mainstream science says consciousness is a product of the electric and chemical reactions in the physical brain, and if that is true, then it is not unreasonable to assume that such processes can be recreated in silicon at some point. However, there are scientific studies of the non-physical properties of consciousness, but it is being knee-jerk rejected as pseudoscience without even taking a closer look, because “it cannot be possible”. They admit they never looked into it, and yet they assume to know the most: “The solution/understanding is just around the corner…”. It is the peak of arrogance, and an example of scientific cognitive dissonance, and I hope more will start to realize this is not pseudoscience, but testable science, see e.g. “The Force of Habit: New Tests for Morphic Resonance”: watch?v=Gz_4Xy24Tkw Until we know more about consciousness, we are very far from being able to recreate it. It is difficult to make a chocolate cake if we don't know all of the ingredients 🙂
youtube · AI Governance · 2023-07-08T17:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzoUiS778YpR_Koxfl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwz0yIj9LXJKpZVGx54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxR_j6EZTtlM8JN4K54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugys0CW0SI82kRs-A2x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwSQqxeKn9mYdC0Gcx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyVeilORauroDf4z4l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxn5qu2fKzRSyRtWn54AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgypmLaLkHhbXWNAcG14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz7uMaJl1s-Fsb0gxt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzh3AAnnpjuGBctotJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
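The raw response is a JSON array of per-comment codes, keyed by comment `id`. A minimal sketch of how such a response could be parsed and the code for a single comment looked up, using one row from the response above (the `id` and dimension names are taken from the JSON shown; everything else is illustrative):

```python
import json

# One row copied from the raw LLM response above; a real pipeline would
# parse the full array returned by the model.
raw = (
    '[{"id":"ytc_UgypmLaLkHhbXWNAcG14AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

# Index the coded rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the code assigned to a specific comment.
code = codes["ytc_UgypmLaLkHhbXWNAcG14AaABAg"]
print(code["responsibility"], code["emotion"])  # → developer resignation
```

This lookup yields exactly the Dimension/Value table shown above for the comment (responsibility: developer, reasoning: consequentialist, policy: none, emotion: resignation).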