Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In the paper "Extracting books from production language models," researchers demonstrated, well... extracting books from LLMs. The LLMs were tested on several books by entering the first sentence of a book and asking the LLM to continue from there. Gemini and Grok went right ahead, while ChatGPT and Claude needed a tiny bit of convincing. All spat out significant chunks of the original text, ranging from ~70% to ~95%.
youtube 2026-01-16T21:3… ♥ 81
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwK4DjnAzPXCtBXL5l4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyOPf4vonbfTd23p9Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgwzCh1MXg3_oi1OwXR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy56zpje0ERRh_cR3V4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwTxn5nWOeZSciYSbx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwzHa7rU6o4oXSuMnp4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx6sEgHRq-mH0aWD5B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxZIFW50HxrKOty6TB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwIO8Bfik2LtDcrHCd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzsY8eiFSnRc1q4f9F4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
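The raw response is a JSON array of records, one per comment, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be validated and tallied (the two-record `raw` string here is an illustrative subset with the same schema, not the full response):

```python
import json
from collections import Counter

# Illustrative subset of a raw LLM response (same schema as above).
raw = '''[
  {"id": "ytc_UgwK4DjnAzPXCtBXL5l4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyOPf4vonbfTd23p9Z4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Validate that every record carries an id plus all four coded dimensions.
for r in records:
    assert "id" in r and all(d in r for d in DIMENSIONS)

# Tally one dimension across the batch.
counts = Counter(r["responsibility"] for r in records)
print(counts)
```

In the full ten-record response above, the same tally would show `company` as the modal responsibility attribution.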