Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think superintelligence will never be realized in the form we currently think of it, purely because of extreme capital constraints. Besides, the whole field might just be an illusion after all; Apple's Machine Learning Research team just put out a paper 🗞️ https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
YouTube · AI Governance · 2025-06-16T22:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxgGa-GwbHM7UPWOGF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyj9M4wI6L-f9bvyxN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzJ_-EzAEaCTRTyaUB4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgymND7Ep_kVEFF_iIV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzqtvxRoYtL85JOsB14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUK0ANWNww_ShCwdJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugwet04hm66O3mMvXoJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyo5CJpdcYCrgZ5_NZ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyrv-4VQDp3lXxuTml4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwtgZOkac5hB2payBN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
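The raw response is a JSON array with one object per comment, keyed by comment id. A minimal sketch of recovering one comment's coded dimensions from such a response (the array is shortened here to the single record matching the table above; field names are taken from the response itself):

```python
import json

# Shortened raw LLM response: one coded record per comment id.
raw = (
    '[{"id":"ytc_UgzqtvxRoYtL85JOsB14AaABAg",'
    '"responsibility":"unclear","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Parse the array and index the records by comment id for lookup.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

# Extract the four coding dimensions for the comment shown above.
coded = by_id["ytc_UgzqtvxRoYtL85JOsB14AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```

This mirrors what the Coding Result table displays: the per-comment record is pulled out of the batch response by id, and its four dimension fields populate the table rows.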