My guilty secret is that I always check the SparkNotes summary of a chapter I just read to make sure I didn’t miss anything. I’d be surprised if an LLM could summarize chapter by chapter effectively without including information from other chapters. You could inject the entire chapter as context, but very few models have large enough context windows for that.