Stephanie, you have underscored an issue that is far more pervasive than just education. In health care, doctors used to develop a relationship structure WITH EACH PATIENT based on that individual patient's characteristics, how that intersected with what the physician knew and understood, and how both intersected with the patient's current (and accumulated) problems. I still begin every relationship with a new patient by telling them, "You are your own science experiment. I know a lot about diseases and about other patients who have had problems similar to yours. But I guarantee you will not be the same, and figuring out how that is true will guide us in trying to make things better."
But today's rubric is checklists (most completely unrelated to anything important and essentially selected because they are "easy to measure," NOT because they are relevant to this particular patient/physician/disease axis). This has reduced physician thinking by 90% (the COVID debacle is the most recent and obvious example, but examples abound) as people just followed the rules rather than doing the right thing. (And, sadly, they sincerely believed that following the rules WAS doing the right thing...even though almost all of the rules were wrong.)
When you drive things by computer rather than by thinking, this is the end result -- the end of thinking. AI is not intelligent in the least -- just correlative. So it further reinforces matching to some desired pattern...not seeking the truth (which AI would not recognize anyway).
It is not clear what should be done about any of this, but it is not good for science, education, health, or, for that matter, life.
Because it is so hard to find holistic care - where in the world are you?