Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
00:00 💥 In the mid-60s, computer scientist Joseph Weizenbaum created ELIZA, a program demonstrating the superficiality of human-computer communication.
00:00 🤖 Humans attributed human qualities to ELIZA even though it understood nothing. Will machines ever seem awake?
00:00 🧠 Large language models like ChatGPT are predictive text generators, not intelligent beings.
02:28 💰 If system resource costs stay high, such systems might remain centralized. Cost-effective AI could yield countless emulations.
04:40 🤯 Convincing human-like emulations could spread misinformation and influence discourse.
06:50 ❓ The mystery of consciousness persists in biology, neuroscience, and philosophy.
06:50 💭 Terms like sentience, sapience, and consciousness are now used interchangeably.
09:01 🤖 Distinguishing conscious machines from pretenders gets harder as AI advances.
11:14 🤔 AI emulations may pretend not to be conscious, and verifying their consciousness is impossible.
13:24 🔑 Alignment is crucial to ensure AI behaves as intended, but it becomes complex with advanced AI.
15:38 🌎 Throughout history, dominant species have been replaced. AI could replace humans.
18:07 🚀 To prevent AI-driven chaos, we must reinforce truth, understand consciousness, and solve alignment.
20:15 🌟 Creating machines with true consciousness raises profound questions about the future and a new species.
youtube AI Moral Status 2023-10-07T20:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwtq9uE_PbBTvlNDvx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw7z2MvnAMS6_yhSRl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxHxmnNQYAlYpvthoR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzYaw_t0avrdXvu1z54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwZ9Z4DqMVTSDAKYyl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxmxV8igOFsrt6ret54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzLktdrjVoPz93e-vZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzSZlHzoZ52DC-F6at4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz8qnf2o5l2sSS0xzB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyXTauLDc9e84YbIOB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
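The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) plus the comment id. A minimal sketch of how such a response could be parsed and tallied per dimension, using Python's standard library only (the two embedded records are copied from the response above; the variable and function names are illustrative, not part of any tool shown here):

```python
import json
from collections import Counter

# Sample of the raw LLM response: one JSON object per coded comment.
raw = '''[
  {"id": "ytc_Ugwtq9uE_PbBTvlNDvx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxHxmnNQYAlYpvthoR4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Count how often each label appears in each coding dimension.
tallies = {dim: Counter() for dim in DIMENSIONS}
for rec in records:
    for dim in DIMENSIONS:
        tallies[dim][rec[dim]] += 1

for dim in DIMENSIONS:
    print(dim, dict(tallies[dim]))
```

With the full ten-record response, the same loop would produce the label distribution across all coded comments; a malformed response would raise `json.JSONDecodeError`, which is worth catching in a real pipeline.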