Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans are conscious. AI, at least for now, is a computational model; specifically, a massively multivariate and latent non-linear regression, calibrated efficiently to volumes of data samples. As in physics and other scientific disciplines, sometimes even economics, such computational models can be useful for approximating reality when they are well specified and robustly calibrated. However, the model is not reality; it's a useful approximation, and only in the best possible circumstances. Similarly, a binary computing machine is not consciousness. Like the physics model, it can be useful in explaining/interpolating the data it's calibrated to. It can be insightful, or at least entertaining, in predicting/extrapolating beyond the data it's calibrated to. But in the end, it's just a computational model imitating the data it's fed.

Second, the AI (that is, the computational model) is calibrated to volumes of data. These data are represented with binary number systems: letters are "characters", words and sentences are "strings", colors are integer-valued three-tuples called RGB, and sound is a seven-tuple. Images are sequences of RGB values, songs are sequences of sound tuples, movies are sequences of images (all this before compression techniques). Though color can be approximated usefully by a three-tuple, in reality color is more complex than just three numbers. These are just useful approximate data representations of reality. With consciousness it's even worse, because we have no data model for it: in fact, we don't even know what consciousness is, or how to define it clearly. How can we get a computational model to approximate something we can't even define? Certainly not by feeding it volumes of data.
Source: youtube · AI Moral Status · 2024-02-28T12:4…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | none                       |
| Reasoning      | unclear                    |
| Policy         | none                       |
| Emotion        | indifference               |
| Coded at       | 2026-04-26T23:09:12.988011 |
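For reference, the schema behind this table can be written down as a small typed record. This is a hedged sketch: the field names come from the table and the raw response below, while the value sets noted in the comments are inferred only from labels visible on this page, not from a full codebook.

```python
from dataclasses import dataclass

@dataclass
class CommentCoding:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str              # YouTube comment id, e.g. "ytc_UgwSphK8OB-Ofj-X57B4AaABAg"
    responsibility: str  # observed: none, company, developer, user, ai_itself
    reasoning: str       # observed: unclear, mixed, consequentialist, virtue
    policy: str          # observed: none, regulate
    emotion: str         # observed: indifference, resignation, outrage, fear, mixed
```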
Raw LLM Response
[{"id":"ytc_UgwSphK8OB-Ofj-X57B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxMtJ5cR8dkN6qKQ_B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_Ugy6C8lFrFYq6wVi7R94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},{"id":"ytc_UgyTx6bhSBkXYUAX3yZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_UgzYLQ-ipGiupXTX4ux4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgwKIuV0TNOs6bkRY-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},{"id":"ytc_Ugy2o7nu4WyNuVDDCEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxXM93u1j8mt3K2WVZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},{"id":"ytc_UgzeOCqzr_3rOn01izd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugx2pHFvUTQSUB-j1zJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}]