Czerniewicz on Teaching and Learning

This brief post explores the thoughts of Laura Czerniewicz on the future of teaching and learning, particularly the importance of contestation and equality.  Czerniewicz is an Associate Professor at the University of Cape Town and the director of its Centre for Innovation in Learning and Teaching.  In her recent podcast presenting her concerns for the future of teaching and learning, she acknowledged that no aspect of either is neutral – it is all political and ideological.  When we teach something to someone, no matter the topic, we always come at it with our own points of view, which colour how we present it.  When we learn something, we always perceive it in light of our own views and pre-existing ideas.

I think that this is what Czerniewicz means when she explains that teaching and learning are never neutral.  However, I believe that this notion of always approaching what we teach or what we learn in light of our own viewpoints is what enables us to be critical.  We link what we present or what we hear to what we already know, trying to assimilate it into our pre-existing world view – whether it backs it up or is at odds with it.

For Czerniewicz, the politics and ideology of teaching and learning are particularly pertinent because of the country she lives in.  As she says in her podcast, in South Africa the wealth of two people equals that of half of the entire population.  Immediately, issues of access and affordability in technology come to the fore.  If there are varying levels of access to learning through the internet, wealthier people with more access will have more opportunity than the poor to be critical and engage with ideas and information that conflict with their own viewpoints.  This is why Czerniewicz’s insistence on open access to knowledge is so important and why she advocates for open education in her podcast.  The more open access to knowledge becomes, and the less hierarchical internet access is, the easier it will be to attain educational equality through technology’s role in teaching and learning.


Pass the Butter

Despite having had one for 25 years, I have absolutely no idea how the brain works.  How can I imagine places and situations in my head?  Which part of my head is that in?  How does what my eye sees get sent to my brain?  Where inside my brain is my memory stored?  Why is it that one day I can’t remember a specific fact, e.g. someone’s name, but the next day I can remember it straight away?  Surely the information was stored there the entire time?

The brain is definitely the most mysterious and complex organ – let’s have a think about how much we know, with our 7 billion collective brains, about the medulla, pons, hypothalamus, cerebellum and hippocampus that we carry around with us.

HowStuffWorks, the website which does what it says on the tin, explains:

Your brain, spinal cord and peripheral nerves make up a complex, integrated information-processing and control system known as your central nervous system. In tandem, they regulate all the conscious and unconscious facets of your life.

Kind of sounds a bit like the internet, doesn’t it?  In last week’s blog, the individual trees supported each other’s survival in the forest through the mycorrhizal networks.  I wonder if, as the internet’s inter-connectivity becomes ever more complex, it can serve as a sort of forest network connecting our human brains together in an increasingly sophisticated system that augments the individual brain’s amazing power.  This has profound potential impact for the future of learning as we know it.

While the internet could expand how we use our brains by facilitating collaboration, AI and machine learning will also be essential in the future of learning, so it seems important to discuss their relationship with consciousness.  After all, that’s the one thing that distinguishes humans from machines, isn’t it – consciousness.  Humans have emotions, the ability to think for themselves, to disagree, to experience the full range of feelings – guilt, joy, determination, passion.  The robots that we have today don’t have this capability.  If you ask Amazon’s Alexa intelligent personal assistant its opinion on something, it often replies ‘I don’t have an answer for that.’  Its purpose is limited to being a source of information.  I think this is the main thing that will set the human brain apart from artificial intelligence in the years to come – we have consciousness, the ability to think for ourselves.

In an episode of the sci-fi comedy cartoon Rick and Morty entitled ‘Something Ricked This Way Comes’, Rick, a scientist in the mould of Doc from Back to the Future, invents a small, two-armed robot.  When it poses the question, ‘What is my purpose?’, Rick replies ‘Pass the butter’.  The robot completes its task within a few moments and then repeats its question: ‘What is my purpose?’  When Rick clarifies that its only purpose is to pass butter, it is left depressed, contemplating the inanity of its own creation.

‘What is my purpose?’

The robot becomes self-aware, knowing the limits of its own existence, and this produces a crushing sense of futility.  It has no ability to expand beyond the confines of Rick’s parameters, to the extent that later, when Rick is lonely and asks it if it wants to go and see a movie, it replies ‘I am not programmed for friendship.’

The difference between this robot and the human brain is that humans have consciousness.  They have the ability to process the world around them, to encounter new experiences and new relationships, learn from them and adjust how they live their lives accordingly.  Humans’ consciousness has no programmed confines like those of Rick’s butter robot.

If, in the future, artificially-intelligent machines do develop this consciousness, then I’ll concede that machine learning is on a par with that of the incredible human brain.  Until then, they can pass me the butter.


HowStuffWorks – The Brain:

Rick and Morty – Pass the Butter:


‘Plant Elephants’ 🤔

Peter Wohlleben, author of The Hidden Life of Trees, has a theory that trees have different personalities and even communicate with each other via ‘the woodwide web’:

We think about plants being robotic, following a genetic code. Plants and trees always have a choice about what to do. Trees are able to decide, have memories and even different characters. There are perhaps nicer guys and bad guys.

My first thought on reading about his ideas is that Wohlleben is a nutter.  After all, he does call trees ‘plant elephants’.  Surely if trees were sentient, they would have uprooted the human race and seized control as the superior species by now?  But then I wonder if Wohlleben’s whole aim is to anthropomorphise trees so as to increase our empathy with them, thereby promoting conservation.

Suzanne Simard is a professor of ecology at the University of British Columbia who investigates how trees support and communicate with each other.  In her TED talk, she describes experiments she conducted where she found that birch and Douglas fir trees were able to support one another’s survival by transferring carbon through mycorrhizal networks – fungal pathways which transport carbon, nitrogen and other useful compounds between plants:

It turns out at that time of the year, in the summer, that birch was sending more carbon to fir than fir was sending back to birch, especially when the fir was shaded. And then in later experiments, we found the opposite, that fir was sending more carbon to birch than birch was sending to fir, and this was because the fir was still growing while the birch was leafless. So it turns out the two species were interdependent, like yin and yang.

She goes on to claim that Douglas fir trees can even recognise their own kin, sending them more carbon via these mycorrhizal networks than they do to other trees.  They even reduce the space their roots take up to give their offspring more room.

trees communicate

It turns out that Wohlleben is not such a nutter after all!  Trees do not communicate with one another per se but have a system of interconnectivity in place whereby they support one another and collaborate for survival.

In his recent seminar for the Future of Learning module (EDU8213), Professor Sugata Mitra spoke about self-organising systems and the notion that everything is connected in some way.  He used the analogy of a droplet falling into a lake and the reverberations that spread throughout every water molecule in the lake to show the diffusing impact that changes in the world have on everything around them.  This intriguing research into the trees’ systems of connectivity is the ideal illustration of Mitra’s thoughts on the nature of the Internet.

Suzanne Simard describes the forest as a single, self-contained organism whose occupants sustain one another by sharing information and collaborating.  If the Internet is to be a tool for expanding and progressing learning in the future, it can work in the same way: learners positioned within it, sharing information and learning together through the pathways the Internet provides, wherever they are in the world.  In the forest, this sharing via the mycorrhizal networks secures the survival of trees endangered by shading or damaging insects.  Similarly, in the future of learning, the Internet as a collaborative network can protect those who are vulnerable through the sharing of learning and the proliferation of knowledge for all.


Artificial Intelligence – More Dangerous than North Korea?


In a recent episode of Celebrity Hunted on Channel 4, a series in which surveillance and secret service professionals use any means necessary to track down participants on the run, the hunters accessed the search history of one fugitive’s Amazon Echo ‘intelligent personal assistant’, Alexa.  Although Alexa was not able to give out much information on Made in Chelsea star Jamie Laing’s plans to evade capture, the ease with which the hunters were able to hear back the questions Laing had posed to Alexa was alarming.

The potential for all-encompassing surveillance of the population is one cause for concern but billionaire businessman and investor Elon Musk warns that AI itself poses considerable risk to the future of humankind, not just in terms of how it could be misused by humans.  Following an AI bot’s defeat of two of the world’s best ‘Dota 2’ players after just two weeks of training, Musk tweeted ‘If you’re not concerned about AI safety, you should be.  Vastly more risk than North Korea.’  Although this is somewhat sensationalist, Musk’s calls for regulation in the field of AI are based in reality: ‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated.  AI should be too’.

Mark Zuckerberg, on the other hand, offers a totally contrasting perspective.  In a Facebook Live session earlier this year, Zuckerberg predicted:

In the next five to ten years, AI is going to deliver so many improvements in the quality of our lives.  One of the top causes of death for people is car accidents still and if you can eliminate that with AI, that is going to be just a dramatic improvement.  Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used.  But people who are arguing for slowing down the process of building AI, I just find that really questionable.  I have a hard time wrapping my head around that.

Zuckerberg’s comments are somewhat circuitous and dismissive; I find Stephen Hawking’s comments on the subject to be more measured and thoughtful: ‘The real risk with AI isn’t malice but competence.  A super intelligent AI will be extremely good at accomplishing goals, and if those goals aren’t aligned with ours, we’re in trouble.’

In terms of the future of learning, the danger with AI is that, as Hawking says, it becomes so advanced that our goals (promoting education, furthering learning outcomes) do not match up with those of the AI (its own omniscience, its own omnipotence or even just its need to experience the world).  If that were to happen, human learning would go out the window.  This is explored in the 2015 science fiction thriller Ex Machina, in which search engine company CEO Nathan Bateman (Oscar Isaac) attempts to measure whether his humanoid AI robot Ava (Alicia Vikander) is capable of true thought and consciousness.  Without revealing too much of the plot, I’ll say that the film serves as a warning of the possibility of artificial intelligence putting its own wishes before those of its creators – and I thoroughly recommend it!

ex machina

If AI is to take a central role in the future of education, regulation is vital, as Elon Musk states.  However, the question remains – who is going to take charge of this regulation?  Governments?  Does this not leave us open to the dangers explored in last week’s post relating to Baraniuk’s ‘concerns over the aggressive enforcement of police state policies online, which allow oppressive regimes to censor media or spy on their own citizens with increasing ease’?  Perhaps the presence of AI could be used to ensure limitations on this dangerous power posed by totalitarian regimes.


Summer Morning or Winter Evening?

During our most recent Future of Learning class, we enjoyed an online lecture from Bhargav Vasavada, an expert in cyber risk and cyber security for Deloitte in Houston, Texas.  He discussed the relationship between user convenience and risk.  As software becomes more sophisticated and user convenience increases (e.g. driverless cars), the risk to the user increases (e.g. instances of hackers seizing control of the car’s system and causing an accident).  In his BBC Future article ‘What Will the Internet Look Like in 2040?’, Chris Baraniuk offers two scenarios – one utopian and one dystopian – for where he feels the Internet will take us in the next 25 years.  The first goes like this:

It’s a summer morning in 2040. The internet is all around you and all the things that you’re about to do during your day will fall in to place thanks to the data streams flying across the internet. Public transport to the city dynamically adjusts schedules and routes to account for delays. Buying your kids the perfect birthday presents is easy because their data tells your shopping service exactly what they will want. Best of all, you’re alive despite a near-fatal accident last month because doctors in the hospital’s emergency department had easy access to your medical history.

Here, user convenience has increased to such an extent that even the act of thinking is obsolete.  Data-driven efficiency pervades every aspect of life and all decisions are made for us thanks to the Internet’s comprehensive record of everything.  But at what cost?  As Vasavada explained, if we embrace this information-dominated utopia, the risk is considerable.  Take Baraniuk’s second scenario, which is somewhat darker in outlook:

It’s a winter evening in 2040 and the world is a darker place. The internet is teeming with cybercrime and it’s become impossible to go online without making your bank account vulnerable or risking identity theft. Trolls have taken over social networking, the web is ludicrously priced and segmented, meaning only the rich can access the most useful and up-to-date resources. If that wasn’t bad enough, in some countries people’s every move is constantly monitored by secret police using networked sensors and internet communications. Even if you can get online, would you want to?

The reality is that we can’t have the summer morning in 2040 without the winter evening.  How can we allow AI the freedom to maximise user convenience without also allowing those with nefarious motives to exploit the anonymity the Internet provides and the surveillance it enables?

These ideas were explored by science fiction writers such as Isaac Asimov and Philip K. Dick in the mid-twentieth century, before the Internet even existed.  Asimov’s Three Laws of Robotics were first introduced in his 1942 short story ‘Runaround’ and are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


Asimov’s robots make life more convenient for mankind, but only because the risk created by this convenience (robots wreaking havoc and seizing power) is mitigated by the laws that limit the robots’ capacity to put their own desires and ambitions before those of humans.  Philip K. Dick’s tale ‘The Minority Report’ imagines a world where crime is wiped out entirely thanks to the ability of Precrime, a predictive policing system, to foresee where and how crimes will happen before they take place.  A new technology that appears to create a crimeless utopia compromises ethics, as people are prosecuted for crimes they have not yet committed.


If the Internet is to be used as a tool for increasing convenience and efficiency in 2040, the risk posed by what is described in Baraniuk’s dystopian scenario must be alleviated, just as the Laws of Robotics lessen the risks posed by the robots.  But should the Internet be regulated?  Won’t this lead to a surveillance-driven society akin to that of ‘The Minority Report’, where people are punished for crimes that they might commit in the future?

In last week’s post, I explained my realisation that the majority of my learning in 2017 occurs via the Internet.  In the years to come, this will only increase as Internet access becomes effortless and all-encompassing.  The nature of the Internet – that any information is accessible to anyone at any time – is a major weapon in the fight against educational inequality.  Learning is not just possible for those who can access high-quality schooling but for anyone who can use the Internet.  All of a sudden, choice about how and what we learn is open to all, not just to those who are wealthy enough to eschew the government’s prescribed curriculum in favour of what is offered in a prestigious private school.  If the state were to have the power to impose regulation on the Internet, this weapon that levels the playing field would be rendered impotent: governments could impose their own curriculum and agenda on what could and could not be learned via the Web – or, even worse, monetise the Internet so that certain levels of access were only available to those who could afford them.  Baraniuk alludes to ‘concerns over the aggressive enforcement of police state policies online, which allow oppressive regimes to censor media or spy on their own citizens with increasing ease.’  If we want to ensure that the risk resulting from user convenience is mitigated, we must consider the possibility that, in mitigating that risk, the regulating body may enforce its own oppressive agenda.

So what is the solution?  How can the future of the Internet be protected so that it can be used to expand the horizons of the most vulnerable in the world?  If only Isaac Asimov and Philip K. Dick were still around to tell us.


Chris Baraniuk, ‘What Will the Internet Look Like in 2040?’, BBC Future, 2014

Isaac Asimov, ‘Runaround’, 1942

Philip K. Dick, ‘The Minority Report’, 1956

Luther’s Guide to Personal Learning Networks

When our tutor James first tasked us with creating our own PLN (Personal Learning Network), I had never heard of the term before and kept misremembering it as a ‘Private Learning Network’.  However, having mapped out all the different ways in which I learn as an adult, it’s clear that learning is anything but private!  People learn in very different ways – for example, I would much sooner read a book visually than listen to an audio version, but I have friends, very keen readers, who read exclusively by listening.  However, the aspect of learning that unites everyone is that it’s a very public, shared process.

Today is the 500th anniversary of Martin Luther nailing his 95 theses to the door of All Saints’ Church in Wittenberg, an act of defiance that led to the beginning of the Reformation and a dramatic turning to evangelical Christianity across Europe.  Luther had previously been a devout Roman Catholic monk but railed against the Roman Catholic Church’s teaching on indulgences (paying money to spend less time being punished for sins) after reading the book of the Bible called Romans – ‘For in the gospel the righteousness of God is revealed—a righteousness that is by faith from first to last, just as it is written: “The righteous will live by faith”’ (Romans 1:17).


Luther’s whole belief system about the universe was shaken to the core – by reading.  Following Luther’s stand for his radically transformed beliefs, Emperor Charles V of the Holy Roman Empire issued a decree declaring him an outlaw, making it a crime to give him food or shelter and granting anyone permission to kill him without legal repercussions.

Martin Luther was unable to keep his seditious views to himself because he believed that they needed to be shared.  He knew that when one’s opinions are uprooted by a new perspective, this is an open, collaborative process from which others can benefit.  If we keep our thinking to ourselves, we’re not doing anyone a favour and our learning is a waste of time.  We should nail our colours to the mast – even if the Emperor makes our assassination fair game.


*Not this Emperor*

So how does the Personal Learning Network fit into all this?  When I made my PLN using the tool below (thanks Carmel for pointing us all to it), I realised that everything I use to learn points back to technology.  All the ways in which my understanding is challenged – others’ blogs, YouTube videos, reading articles, Twitter – come to me through the medium of technology.  This gives me a more hopeful perspective on the future of the Internet’s use for learning.  There is a lot of scope for it to be used to challenge authority, just like Luther did 500 years ago today, instead of enabling Internet-led subjugation of the masses.

So let’s all be Martin Luthers today.  If you find something new and interesting that changes your views drastically, share it with others – online or otherwise.

Jack personal learning network


The Internet of Brains

Imagine a classroom where every child is connected to the Internet inside their head.  A teacher would have no way of knowing whether they knew that Ben Nevis is the highest mountain in the UK or whether they had looked it up using their internal connectivity.  How could the teacher assess and differentiate their lessons?  But the real question is – what’s the difference between the child knowing the fact and looking up the fact?  If she can find something out immediately, without the teacher knowing that she didn’t know it before, is that the same as her already possessing the knowledge?

What’s to stop the class from communicating with one another, or someone on the other side of the world, via instant messenger during an exam?  What’s to stop them projecting the latest binge-worthy Netflix show onto their retinas during a French lesson?  The idea of in-built Internet access presents some problems in traditional schooling but, taking a more optimistic perspective, if all the facts and knowledge were available for instant access, could that free up more time for deeper, creative and evaluative skills to develop?

These are all questions that come to mind following Professor Sugata Mitra’s recent seminar for the ‘Future of Learning’ module at Newcastle University.  To prepare for this post, I asked a local teacher what they thought about this possible development in technology and their reaction was very sceptical.  I asked whether they thought instant access to knowledge and facts could push education away from a didactic curriculum towards a heavier focus on analysis and synthesis skills but they pointed out that it is not just factual knowledge that is widely accessible on the Internet; certainly within subjects such as English or History, much analytical work could be regurgitated from websites such as SparkNotes.

Personally, I find Professor Mitra’s speculative predictions for the future of learning incredibly fascinating but somewhat concerning.  The human brain is a totally amazing piece of engineering – it comprises billions of interconnected pathways, much like the Internet:

brain illustration

It is astounding in the knowledge it can retain, the wisdom it can offer and the split-second judgments it can make.  Human error is a deep and profound source of learning, and constant connectivity would diminish this.  If I can calculate 8×7=56 instantly with an online calculator, I will never get it wrong, but I will also never develop the reasoning skills to understand that 0.8×70 also equals 56, that 0.08×700 also equals 56, and so on.
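As a quick illustrative aside, that place-value reasoning can be sketched in a few lines of Python (using exact fractions so the scaled products compare cleanly, without floating-point rounding noise):

```python
from fractions import Fraction

# One known fact: 8 x 7 = 56.
base = Fraction(8) * Fraction(7)

# Place-value reasoning: divide one factor by 10 and multiply the
# other by 10, and the product is unchanged.
scaled_once = Fraction(8, 10) * Fraction(70)     # 0.8 x 70
scaled_twice = Fraction(8, 100) * Fraction(700)  # 0.08 x 700

print(base, scaled_once, scaled_twice)  # prints: 56 56 56
```

A calculator hands over each answer separately; it is working through equivalences like these that builds the reasoning the paragraph above describes.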

Amanda Morin argues that memory and recall is hugely important in securing progress, particularly when supporting children with learning difficulties who struggle to retain instructions.  She explains that ‘Kids rely on both incoming information and information stored in working memory to do an activity. If they have weak working memory skills, it’s hard to juggle both.’  A future over-reliance on the Internet to the extent that using one’s memory becomes obsolete would result in a dearth of memory and recall skills, creating barriers to children engaging with challenging and stimulating tasks.

Moreover, a recent study (Madore et al., 2015) suggests that episodic memory is directly linked to creative skills: ‘Episodic memory supports ‘mental time travel’ into the future as well as the past, and indeed, numerous recent studies have provided evidence that episodic memory contributes importantly to imagining or simulating possible future experiences.’  Those with the ability to recall clear and vivid memories of past experiences can utilise that skill to envisage new, original ideas that have benefits for all.

Therefore, if we want to support our children in becoming the blue-sky-thinking Sugata Mitras of the next generation, we must not let their cognitive development be stunted by farming out their thinking skills to websites.  Let’s not limit the human mind to the confines of the web.


Jacob Dubé, ‘The Dangers of the “Brainternet”’, 2017

Greg A. Dunn Artwork

Kevin Madore, Donna Rose Addis and Daniel Schacter, ‘Creativity and Memory: Effects of an Episodic-Specificity Induction on Divergent Thinking’ (2015, Psychological Science, Vol. 26)

Amanda Morin, ‘5 Ways Kids Use Working Memory to Learn’

A Warning for Dinosaur Fans

In his 1958 dystopian short story ‘Examination Day’*, Henry Slesar tells the tale of the Jordan family and their son Dick1.  They live in a society where every child, on reaching their twelfth birthday, must take a mandatory government exam.  On his birthday, Dick notices the ‘anxious manner of her speech’ in his mother and is confused by ‘the scowl of his father’s face’.  When it comes to sitting the exam, Dick is given a milky liquid to drink, which his father explains will ensure that he answers the questions truthfully.  The exam begins and the narrative cuts to Mr and Mrs Jordan, who nervously await a phone call in their living room.  Mr Jordan answers and the emotionless voice explains: ‘This is the Government Educational Service. Your son, Richard M. Jordan, Classification 600-115, has completed the Government examination. We regret to inform you that his intelligence quotient has exceeded the Government regulation, according to Rule 84, Section 5, of the New Code.  You may specify by telephone whether you wish his body interred by the Government or would you prefer a private burial place? The fee for Government burial is ten dollars.’

This is an unsettling tale of totalitarian policy limiting the intelligence and learning of the population.  While it is highly dystopian and distant from reality, every day thousands of gifted children are denied the opportunity to dive deeper into their learning and reach their potential because of a government’s prescription of what and how they should learn.

Curricula aid teachers in forming the structure of their teaching and ensuring a logical flow of lessons, but if, as in the UK, a curriculum is heavily weighted towards subject knowledge and facts that must at all costs be retained, then teaching will naturally lean towards didactic learning and knowledge memorisation.  This has its place, but open-ended questioning, investigative learning and encouragement to research independently are fundamental in supporting higher-ability children in meeting their potential.  If these do not form part of lessons, our education system is not much different from the intelligence-limiting policy present in ‘Examination Day’.

Jean Piaget (1896-1980) was a Swiss developmental psychologist who first proposed his constructivist theory of cognitive development in major works such as The Language and Thought of the Child (1923) and The Origins of Intelligence in Children (1936).  Piaget theorises that a child builds understanding by interacting with its environment.  Learning happens through assimilation (fitting new information in with pre-existing understanding) and accommodation (adapting prior understanding when new information does not fit).  According to Piaget, ‘Each time one prematurely teaches a child something he could have discovered himself, that child is kept from inventing it and consequently from understanding it completely’2.  Discovery learning is an approach which grew out of this idea that a child’s understanding of a concept is more concrete if they have discovered it for themselves.  In discovery learning, a student is not provided with direct information or specific answers but with the necessary tools for finding a solution themselves.

This approach ensures opportunities for children to push the boundaries of their learning and embrace the challenge of gaining deep understanding of different topics.  It supports them in embracing failure as an opportunity to develop.  It protects children from the Examination Day-esque limiting of intelligence that is ominously present in prescribed, fact-based curricula.  That said, it presents some dangers.

If a child is left to their own devices, as Piaget would have it, to discover viewpoints, ideologies and thought processes, what is to stop them from embracing dangerous ideas?  If children have no guidance regarding right and wrong, no one to direct them towards true and honourable mindsets, how do we protect them from inventing for themselves morally questionable worldviews or even ultimately becoming the proponents of the totalitarian state?

In response to a task to make ‘the future of learning’ out of plasticine in a recent seminar, I made this Stegosaurus:


I justified it by explaining that the future of learning will be self-directed and that learners will dictate what to focus on according to their interests and aspirations.  I wanted to focus on exploring how to make a Stegosaurus out of plasticine, so I directed my discovery learning towards that.  However, if we are serious about protecting our children and giving them as much opportunity as possible to grow into citizens who uphold what is right, teachers and parents must play a significant role as role models, counsellors, caregivers – even sources of knowledge and understanding.

*Special mention goes to my amazing English teacher sister Hannah for pointing me in the direction of this sinister tale!

1Slesar, H. (1958) Examination Day. In Schroeder-Davis, S. (1993) Coercive Egalitarianism: A Study of Discrimination against Gifted Children

2Piaget, J. (1970) Piaget’s Theory. In P. Mussen (Ed.), Carmichael’s Manual of Child Psychology

Privacy in Education: Thoughts on Safety and Progress

The current emphasis on formal assessment in British education means that an enormous amount of data is collected on students’ academic progress, attainment and needs.  Schools spend a large proportion of their budget on subscriptions to software that stores detailed information relating to behaviour, attendance and personal details – all accessible to staff.  In my previous role as a teacher, I could check a student’s full name, address, next of kin and their educational needs with a few clicks on a database, even if that child was not in any of my classes.  I could access all their previous test results and see the progress they had (or had not) made across the year according to formal assessment and teacher judgment, just by typing their first name and hitting enter.

This database technology is extremely useful for tracking progress, tailoring lesson plans to meet students’ needs and maximising learning opportunities.  Spreadsheets accessed on the school drive, which show success on each individual question on a maths paper across the whole class, enable the teacher to immediately identify common errors and significant knowledge gaps.  Sharing attendance records and behaviour reports amongst staff can be instrumental in ensuring a student’s safety by revealing patterns that give cause for concern.
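As a rough illustration of the per-question analysis described above (the student names and marks here are entirely invented, and real school systems would use spreadsheet software rather than a script), the calculation a teacher performs when scanning such a spreadsheet can be sketched in a few lines of Python:

```python
# Hypothetical class results: one entry per student, one mark per question
# (1 = correct, 0 = incorrect).  All names and scores are invented.
results = {
    "Amira": [1, 1, 0, 1, 0],
    "Ben":   [1, 0, 0, 1, 0],
    "Chloe": [1, 1, 0, 0, 1],
}

num_students = len(results)
num_questions = len(next(iter(results.values())))

# Success rate on each question across the whole class.
success_rates = [
    sum(scores[q] for scores in results.values()) / num_students
    for q in range(num_questions)
]

# Questions most of the class got wrong suggest a shared knowledge gap
# worth reteaching, rather than individual errors.
gaps = [q + 1 for q, rate in enumerate(success_rates) if rate < 0.5]
print(gaps)  # prints [3, 5]: questions 3 and 5 need whole-class attention
```

The point of the sketch is simply that aggregating marks by question, rather than by student, is what lets common misconceptions surface immediately.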

I believe that this is the most effective argument for maintaining records on children and sharing them between agencies.  One case that demonstrates its importance is that of Khyra Ishaq, a seven-year-old girl who tragically starved to death in 2008 following neglect and abuse at the hands of her mother and her partner1.  Hilary Thompson, chairwoman of the Birmingham Safeguarding Children’s Board which conducted an inquiry into her death, explains that ‘Khyra’s death was preventable. The report identifies missed opportunities, highlighting that better assessment and information-sharing by key organisations could have resulted in a different outcome.’  A lack of communication between agencies meant that patterns were not identified until it was too late; centrally shared records and reports of professionals’ interactions with Khyra and her family could have prevented what happened.

Audrey Watters raises the issue of vulnerability in education and how much right children have (if any) to their own data and personal details2.  Is there a danger that all this data mining could have sinister as well as beneficial outcomes for students?  To what extent should children be able to dictate what happens with records of their own academic progress?  Cory Doctorow adds to the debate the issue of OFSTED and schools being held accountable in much the same way as businesses are with inputs measured against outputs3.  School leadership teams in the UK put huge emphasis on achieving as high an OFSTED rating as possible and as this aim is largely met through performance on tests such as the SATs, significant priority is given to data used to inform progress reports or annual targets.

Data-sharing is paramount, not only in boosting academic progress and identifying gaps in learning but especially in safeguarding.  I therefore believe that it should be prioritised over Doctorow’s concerns relating to over-testing and the pressures that the government puts on schools.  However, I sympathise with Watters’ views on allowing children more control over how data is gathered, used and accessed.  The extent of this control would depend on a child’s age, but perhaps school leadership could be more transparent with children and parents in explaining which data is stored, for how long, and how it impacts positively on teaching and learning.

In conclusion, as the use of technology in assessment will only become more prevalent, future learners should be aware that their personal data plays a major role in securing their safety as well as their progress in learning.  Future educators should consider how they can continue to make the most of data collection, but also how they can include learners in decisions about how this data is stored and shared.

1’Starved girl Khyra Ishaq’s death ‘was preventable’’, BBC News article (July 2010).  Available at: [accessed 10th October 2017].

2Watters, A. (March 2015) Audrey Watters: Privacy and Trust in Open Education Part 1.  Available at: [accessed 10th October 2017].

3Doctorow, C. (March 2015) Cory Doctorow: Privacy and Trust in Open Education.  Available at: [accessed 10th October 2017].