
























































Digital Self
Part of the reader “Smart City, Smart Teaching: Understanding Digital Transformation in Teaching and Learning.”
Author: Nils-Eyk Zimmermann
Co-edited by: Ramón Martínez and Elisa Rapetti
Copy-editing: Katja Greeson
Design: Katharina Scholkmann (layout), Felix Kumpfe, Atelier Hurra (illustration)
Publisher: DARE – Democracy and Human Rights Education in Europe vzw., Brussels 2020
Editors of the series: Sulev Valdmaa, Nils-Eyk Zimmermann
The project DIGIT-AL – Digital Transformation in Adult Learning for Active Citizenship – is a European cooperation, coordinated by the Association of German Educational Organizations (AdB) with:
DARE – Democracy and Human Rights Education in Europe vzw. (BE)
Centre for International Cooperation CCI (IT)
Education Development Center (LV)
Jaan Tõnisson Institute (EE)
Partners Bulgaria Foundation (BG)
Rede Inducar (PT)
If not otherwise noted below an article, the content of this publication is published under a Creative Commons Attribution-Share Alike 4.0 International License. Supported by:
The project is supported in the framework of the Erasmus+ program of the European Commission (Strategic Partnership in the field of Adult Education). Project Number: 2019-1-DE02-KA204-
The European Commission‘s support for the production of this publication does not constitute an endorsement of the contents, which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
Co-funded by the Erasmus + Programme of the European Union
A lot of the curiosity and the growing concern regarding digitalisation today have to do with its ‘engine room’ – the fascinating global infrastructure of the Internet, its enormous costs and hunger for energy, Big Data, AI, and the increasing economic value of digital platforms. In particular, the growth of new kinds of platforms, fuelled by digital business models that successfully capitalize on their users, is a widely visible phenomenon of this new technological and economic configuration. Consequently, their users are at the same time subjects and objects of digital change. They experience the opportunities made available through new, platform-mediated forms of interaction, but also feel uncomfortable because they are equally affected in their role as autonomous subjects. The rights to independent information, privacy and security are, from this perspective, not yet sufficiently respected in the digital sphere.

The migration of substantial parts of working and communication processes to the digital sphere over the last decades is likewise both a benefit and a challenge. One aspect is technical mastery – access to current technology and the ability to use it competently. A more fundamental aspect is that the “digital self” complements people’s analogue identity. Digital traces accompany people’s lives, with consequences for their various social roles as private subjects, employees and citizens. Feeling overtaxed by the associated challenges and concerns is a poor prerequisite for learning and a poor basis for future personal and social decisions. It is high time for adult education and youth work to address this double-edged sword. Adult citizenship education in particular has a lot of experience teaching complex social issues and could transfer its methodology and approach to the topic of digital transformation.
There is No Overly Complex Issue for Education

We know, for example, that nobody needs to be an economist to co-decide on political decisions affecting the economy. We are also capable of understanding the social impact of cars despite very limited knowledge of automotive engineering. Considering that it is possible to acquire knowledge about digital transformation, could we not even enjoy learning about Big Data, robotics, algorithms or the Internet of tomorrow, similar to the way we passionately discuss political issues such as transport, ecology or democracy?

We should not, however, be blinded by the technical complexity of the digital transformation. It is important that we pay more attention to the social dimension: the intentions behind a technology, its effects, and its regulation. Although not familiar with all technical or legal details, most people intuit that it is ill-advised to give out personal information without consent. We have a sense of what the right to privacy should entail and what distinguishes conscious decisions from uninformed ones, and in our analogue world we discourage the “used car salesmen” of our society from taking unsuspecting customers for a ride. After all, most of us have experienced the discomfort of having been deceived because we did not understand the fine print.

If we transfer this insight to a pedagogy of digital transformation, we should also be willing to explore new aspects of the technical dimension, such as data processing or the nudging mechanisms of online platforms. But that is not the only priority. The most important thing is that we know what our rights and ethical foundations are, how they relate to the new digital contexts, and how to act accordingly. These questions are not limited to privacy and safety, as seemingly no aspect of social life is unaffected by digital transformation. On this foundation, we can further explore the potentials and risks of digitalisation in context and assess its impact.
Personal rights, for instance, entail privacy issues, but digital transformation has also created new opportunities for co-creation, better information, and the involvement of citizens in decision-making processes. On this basis, we are then able to define the conditions and rules under which certain digital practices should be rolled out or restricted.

Electronic communication has changed the character of human communication as a whole. Fewer fleeting ideas or assertions go undocumented; what is recorded can later be searched and rehashed. This change is both positive and negative, for example from the perspective of an employee who may be judged on past decisions that live forever online. Pedagogy might help people better understand the risks and benefits associated with electronic communication. In addition, it will be a creative challenge to imagine the technology we want to develop as a society and what will help us initiate social, economic and cultural change in the future. In this regard, it is also important to develop a critical view of the so-called ‘skill gaps’ and ‘digital gaps’ people may face when mastering digitalisation: What is the purpose of defining a gap? For whom is the gap relevant? In whose interest is it to emphasize the risk of gaps as opposed to their benefits?
About the Digital Self
For generations, people have done many things to extend their abilities or consciousness. Even before the invention of the term “wearable”, we used tools like glasses, watches, walking sticks, steel helmets, hearing aids and wheelchairs, or mind-altering consumables. Extending our bodies and connecting ourselves with others through such tools has shaped the imagination of the self and of the human body’s abilities. The question of how digitalisation instigates changes to our body, our social identity and our self-image is becoming important for adult and lifelong learning.

This chapter describes the conditions and aspects constituting a digital identity. One important aspect is the machine-human relationship and the conditions under which it is constructed. Another is the identificatory aspect of digital technology: the tension between privacy and identifiability (and identifiability for whom), along with mechanisms of exclusion and inclusion. Digital transformation thus has an impact on ideas of privacy and autonomy and on how they might be achieved in digital social reality, especially under the conditions that big data creates. The second part tackles the question of how exposure to, and embeddedness in, digital interaction affects our abilities and attitudes as individuals. On a personal level, these are health or performance issues; on a social level, the question is whether quantification and datafication influence key assumptions of democracy such as pluralism, individualism, inclusion, and the ability to innovate.
Digital transformation permeates social reality and is, as such, in principle relevant for any specific field of education, any subject, or pedagogy. Together we might work on a broader understanding of what digital literacy is and explore, as educators and learners in lifelong learning processes, how it affects our lives. By anchoring democracy and human rights strongly in lifelong learning, we can lay the foundations for a democratic digital transformation and empower learners to find a constructive and active position within it. We aim to provide basic insights into some of the various aspects of digital transformation as a basis for further exploration. The publications tackle the digital self, participation, the e-state, digital culture, media and journalism, and the future of work and education. In each of the publications we also present our ideas as to how education might take up the specific topic. You may access, read, copy, reassemble and distribute our information free of charge. Thanks to digital transformation (and the Erasmus+ program of the European Commission), we are able to publish it as an “Open Educational Resource” (OER) under a “Creative Commons License” (CC BY-SA 4.0 International).
Into the Internet of Everything
As we become more accustomed to devices and digital services, digitalisation is changing our imagination of the body and influencing our perception of autonomy. Our imagination of humanity is enshrined in the human body, and our biggest concerns are about safeguarding its physical inviolability, dignity, and opportunities to move and to participate. The Internet of Things (IoT) is no longer limited to surrounding devices like intelligent plug sockets, fridges, automotive board computers and factory robots. Wearables and even implants have now “become social actors in a networked environment” (Spiekermann, 2010, p. 2). The coexistence of more and more apps and devices around us makes the vision of ubiquitous computing more realistic. Ubiquitous computing describes 21st-century technology as embedded technology. In an Internet of Everything, the machine is no longer spatially separated, for instance in big metal boxes in dedicated rooms. In the words of digitalisation pioneer Mark Weiser in 1991, many of our devices today are more or less “invisible in fact as well as in metaphor”. They are small, and we don’t recognize them as computers although they technically are. Their value lies in their intuitiveness and connection: “The real power of the concept comes not from any one of these devices; it emerges from the interaction of all of them” (Weiser, 1991, p. 98). Digital assistants like Amazon’s Alexa, Google Assistant and Samsung’s Bixby are good examples that have brought ubiquitous home computing to a new scale: they are always on, monitoring their environment including the beings around them, and communicating independently with the services behind them. These are no longer “stupid” machines that merely sense environmental data and send it to other machines. More and more, they actively accompany us.
When objects become subjects through their interaction with humans, they acquire an identity quite different from the serial number engraved on the back. Because they relate to us and influence our (self-)perception, one key question in this chapter is how human-machine interaction contributes to a shifted perception of our self and enriches our analogue identity – what we call the digital self. Beyond interaction, construction is another aspect helpful for understanding the
question in regard to our digital identities is how individuals might meet providers and creators at eye level. Especially when the Internet of Things and big data come into play, the condition for our interaction is the active involvement of computing power somewhere outside our private sphere. Coming back to the personal digital assistants, we can also phrase it as such: the price for intuitive and individualized computing is reliance on external infrastructure – and also exposure to interception. Technically, these devices always need to be in stand-by mode, which allows them to call home to their, mostly external, servers. A key word activates the process, which privacy activists like Padeluun, the founder of the German association Digitalcourage, criticize. In his criticism of Alexa during the Big Brother Award ceremony in 2018, he described that “the device eavesdrops 24 hours a day in my apartment, always lurking for me saying ‘Alexa’. As soon as it ‘hears’ this, it is going to record the following sentences and send these to the Amazon cloud servers in order to analyse them. My text is going to be translated here, analysed, and actions are then triggered remotely” (Digitalcourage, 2018). Although most owners of such devices trust in the discretion of the services behind them, the devices raise new challenges. First, the collected information is stored for a long time (if not permanently), not on our own property but on the service providers’ servers, which allows them to analyse the collected data afterwards and use it for other offerings. Second, it is not only algorithms that interpret the information collected through digital services and assistants. The public learned in 2019 that Amazon let employees transcribe some Alexa sound snippets. In some cases, conversations were recorded even though the trigger word indicating activation had not been said (Day et al., 2019).
In reaction to the Amazon scandal, Microsoft also had to admit that they had intercepted some Skype calls, in particular those in which the “intelligent” Skype translator was offering automatic translation. These human interventions and interceptions are, from a technical point of view, necessary because the technology is not intelligent enough on its own. Humans need to step into the automated processes and correct them “manually”. Even David Limp, a leading manager at Amazon, agrees with this conclusion and demands that the myths behind Artificial Intelligence be dispelled (unfortunately only after the scandal): “As a sector, we perceived
Five biggest concerns about digital assistants
1. Data abuse by the company: 33%
Gain of Intuition, Loss of Overview

When we recognize the digital sphere as an environment – which according to Merriam-Webster is “the circumstances, objects, or conditions by which one is surrounded” – then a key aspect of digital identity is that persons, services and devices together create a technical-social environment consisting of devices and infrastructure, apps and data traces. Everybody can also be seen as the creator of a unique (which also means increasingly easy to identify) app and data ecosystem that must be managed and mastered. As previously mentioned, the smartphone is the most widespread digital wearable. It has merged functions that formerly belonged to different devices such as navigators, MP3 players, laptops and watches. New devices like smartwatches and fitness trackers have also emerged in recent years. The apps relevant for the body focus in particular on workout/shaping, weight, pregnancy/menstruation, fitness tracking, movement/maps, and food/cooking. Connected to an always-on tracking device like a smartwatch, it is possible to track the body in a simple way. Furthermore, fitness apps nudge and motivate people to follow health-related goals. Most apps, however, are very generous with data: 16 of 19 fitness apps were, according to a test by German consumer protection advocates, “already sending data to third parties (analysis/PR) before consumers have accepted the terms of service and have been informed about the processing of their data” (Moll et al., 2017, p. 21). The integrity of our digital identity relies on how carefully and confidentially others treat it.
Five biggest advantages
1. Quick access to information and search, e.g. Wikipedia: 39%
it as normal that all customers know how artificial intelligence functions. Every such application includes manual checking: for instance, navigation apps are as precise as they are today because people look at the routes driven by users and check them for accuracy. The sector should have communicated this more clearly” (Kapalschinski & Rexer, 2019). This is particularly relevant as people do have privacy and integrity concerns, as a study by the BVDW, the German industry association for the digital economy, demonstrates (BVDW, 2017).
Source: BVDW, 2017
While this could be seen as a question of regular checking and cleaning, another issue is that updates initiated by the services might change fundamental conditions or functions. In particular, the terms of service, ownership and privacy-related settings might change unilaterally through new roll-outs. Moreover, the explanations in the terms of service are not helpful for gaining clarity. Unlike with medicines from the pharmacy, digital devices and services rarely come with an informative product insert.

Beyond smart mobile devices, the smart home should also be mentioned as part of the environment of this Internet of Everything. It includes not only digital assistants, but the whole collection of digitalised technology within our four walls. In many countries, water, heating and electricity consumption is now measured by smart meters. Manual documentation is no longer required, as the data is transmitted automatically to the supplier. As an added value, digitalisation allows better tracking and analysis; for instance, one can assess consumption in much more detail than only once or twice a year in aggregated form. In 2019, Google, Amazon, and Apple joined forces to establish a new standard for the Internet of Things: “Connected Home over IP”. In 2020, apps and devices for monitoring and steering light, heating and plug sockets appeared on the consumer market, as well as connected kitchen machines and fridges. Vacuum cleaners are drawing detailed floor plans and storing them in a cloud. On average, each household has ten connected devices, and the tendency to adopt these tools is rising as smart home technology becomes more available and affordable (Bitdefender, 2016). More and more tablets, smartphones, TVs, consoles and eBook readers with smart features are complementing or replacing the desktop computer. Connected entertainment and data storage solutions are appearing in our households.
Media servers (hard disks with internet access) are replacing bulky CD and DVD collections. Storage disks and network printers are, thanks to their connectivity,
Ubiquitous Computing: A technological vision of many, often small, and very differently connected computing devices, deeply embedded in our daily routines, interacting intuitively with us and with each other.
Internet of Everything: Computing devices for different purposes, of different sizes and with different abilities interact with other devices (Internet of Things), with the surrounding space through facility-installed technology (Smart Home), and with the social environment.
Tracking: Recording personal data constantly over a certain time period and drawing information out of it.
accessible to different users in our home network, or even from outside our home through the internet. Cameras connect to smartphones, PCs, or printers over Wi-Fi or via memory cards with Wi-Fi modules. Finally, the picture would not be complete without mentioning the “datafied” infrastructures outside our four walls. Sensors embedded in our public life measure pollution, noise and traffic, enabling improved management and maintenance of these infrastructures. But personal information is also collected and analysed, for instance through license plate or facial recognition. The combination of such infrastructural information with personal data makes new forms of ubiquitous computing imaginable. This is the narrative of the smart city. The discussion about smart infrastructure oscillates between visions of horizontal and open data on the one side, and a commodification of public infrastructures through IT, often in private ownership, on the other.
Your Self in Your Digital Home

The more common they are, the more invisible and intuitive technologies and processes become, with an ambivalent effect on people’s knowledge and awareness of them. Taking stock can be a starting point for regaining control. On average, ten connected devices are part of each household.

➡ How many devices do you connect via your router or mobile?
➡ How many meters are digitized in your household?

Between 30% and 40% of users have never updated their firmware or initiated security updates.

➡ And you?
➡ Many people don’t know how that would work. Do you?

You can usually control your router through a website (the user interface). For example, you might see how many devices are online and how much data volume you used and when. Here you can also initiate updates, change passwords or set up new accounts.

➡ Have you ever had a look into this backend?
➡ When did you last update your router?
measuring the body’s activities and sharing this data. And while other brain-machine interfaces (BMI) are usually not yet embedded in intensive secondary datafication processes, this might change with their dissemination. In line with the trend toward automation, industrial robots are also developing new features for better interaction. The technological trend points in the direction of more ubiquitous robotics. The International Federation of Robotics assumes that sensors and smarter control will make robots more cautious and collaborative, no longer fenced in cages for safety reasons (IFR, 2020). Although such collaborative robots are still only a small part of worldwide installations, there is huge potential for broader dissemination of this technology: “Rather than a large-scale full-automation, the ease of being able to easily incorporate robots into people‘s work environments as they are is no longer just a large benefit to large companies: It also opens up the possibility of using robots in small to medium-sized enterprises (SME) – often in the form of semi-automation” (IFR, 2019). A feasible future scenario is that robots in industry and services will accompany human activity to a greater extent, learning from interactions with individuals – which requires archiving co-workers’ personal data and sharing it with algorithmic systems. Clausen et al. help us think about responsibility as human-machine interaction becomes more ubiquitous: “A semi-autonomous robot directly linked to and interacting with a brain makes the source of an act difficult to identify” (2017, p. 1338). A precondition for the reliability of such technology is the human ability to control the action of the device, or the action triggered by the device. Therefore, the authors advocate that “any semi-autonomous system should include a form of veto control” (Clausen et al., 2017). The risk of manipulation of body-machine interaction must also be taken into account.
Robots might make moves that are unexpected from the perspective of their co-workers, or their safety mechanisms might be switched off. In particular, the risk
Brain-machine Interfaces (BMI): Electronic connection between brain and computer.
Invasive: Implanted in the body through medical surgery.
Non-invasive: No break in the skin and no lasting damage to the body.
Brain-hacking: Manipulating mental processing, thinking or perception through BMIs, or by blocking or manipulating the functions of BMIs.
Right to the Integrity of the Person: “Everyone has the right to respect for his or her physical and mental integrity.” Article 3 of the Charter of Fundamental Rights of the European Union (CFR)
of manipulation is high in regard to BMI: “However, development of advanced sensors, allowing brain activity to be recorded at higher spatial resolution, coupled with advances in machine learning and artificial intelligence, could substantially enhance BMI capabilities in the near future and overcome the input-output constraint. This could enable more in-depth ‘mind-reading,’ i.e., classification of brain states related to perceptions, thoughts, emotions, or intentions” (Clausen et al., 2017, p. 1338). In particular, indirect manipulation – influencing the connection between human and device – is becoming a feasible technical scenario. Why learn to manipulate an implanted chip if you could just turn it off? “For example, neurally-controlled robotic limbs used to compensate for the motor deficits of amputated patients are potentially vulnerable to mechanical destruction by malicious actors, which would deprive the users of their required motor abilities” (Ienca & Haselager, 2016, p. 3). Fortunately, brain-hacking through input manipulation (false input values), measurement manipulation (inexact measurement results), decoding and classification manipulation (mistakes in interpretation) or feedback manipulation (manipulated feedback signals triggering wrong actions) is more difficult. However, the same technology opens opportunities both for enabling people to use their brains and regain autonomy, and for direct manipulation that limits their autonomy: “The same neural device (e.g. the same BCI) has the potential to be used for good (e.g. assisting cognitive function in neurological patients) as well as bad purposes (e.g. identity theft, password cracking and other forms of brain-hacking)” (Ienca & Haselager, 2016). Society needs to restrict these manipulation opportunities drastically, as they have greater damage potential than other forms of influence: autonomy and freedom of action and perception are at stake.
This risk is disproportionately higher for vulnerable groups, for instance in hospitals, the military or prisons. Such amplified risk would need to be limited by stronger specifications for privacy, control and integrity by design. “If the Charter of the Fundamental Rights of the European Union is claiming in Article 3 a right to integrity (’Every person has the right to physical and mental integrity‘), then the conclusion is […] that there cannot be an unauthorized access to the brain” (Meckel, 2018, p. 232). Users also need to rely on the integrity of other connected and surrounding devices and on the integrity of the services acting in their interest. These services should not be allowed to spy or to change functionality “behind users’ backs”, whether by stopping support of a heartbeat in the case of a pacemaker or by sharing data with others. Integrity must also cover actions related to user intention and interest – particularly important in legal cases, when the question arises of whether personal data could be used against an individual (Lobe, 2019, p. 85). To whom are the producers of devices loyal when my rights stand against the interests of others? In the analogue world, legal privileges are foreseen for trusted persons: lawyers, priests and doctors are bound to confidentiality and discretion. What kinds of loyalty do we need to legally bind other actors to in the digital world?
Disabled or Cyborg? A Social and Technological Challenge

Interview with Bertolt Meyer, Technical University Chemnitz (Germany), professor for organizational and economic psychology.

16% of the world population have a disability. In regard to digital transformation, can they expect a golden age?
I would say yes and no. The issues facing people with disabilities are manifold. On the one hand, there are functional limitations that come with a disability. In this regard, new technology offers to offset these limitations to a greater extent than before, promising new inclusion especially for people with certain physical disabilities. On the other hand, the issues are not only functional ones. One of the biggest problems people with disabilities face, besides functional limitations, is the stigma and stereotypes that lead to structural and psychological disadvantages. Stereotypes and stigma are at least as great an obstacle to inclusion as functional limitations, if not greater. That being said, new technology does promise to offset functional limitations: advanced prostheses, wheelchairs that can climb stairs, artificial eyes for blind people or cochlear implants for people who have lost their hearing are examples. When it comes to stigma, however, the underlying idea is the opposite of the notion that majority society must change to become more inclusive: instead, we strap technology onto the disabled person’s body and make the disability disappear. In that sense, the disability becomes a burden and responsibility of the disabled person – not of majority society. The stigmas that exclude do not change.

You mentioned in your study a shift in the perception of disabled persons, also in pop culture. Looking at Jaws in James Bond, modernity was no longer reproducing the stereotype of the old veteran with the simple wooden prosthesis. How does this shift impact the majority’s perception of disabled people?
The most common stereotype that people with disabilities face is the so-called paternalistic stereotype. They are seen by others as what we call ‘warm but incompetent’. The two core dimensions of stereotyping are, first, warmth – how warmly people perceive others from certain groups – and, second, competence – how well people put their intentions into action. Old people and people with disabilities are seen as warm but incompetent. That is why we offer them help, and in so doing we signal to the person that we perceive them as less competent. What technology can do is offset this stigma. Modern assistive devices stand for technological advancement. There is also a curious pop-culture discourse happening
between the Transhumanist movement, which portrays technology as a tool to overcome the limitations of the human body, and prosthetic devices. Suddenly we have a new generation of prosthetics and assistive devices that signal anything but incompetence. We find in our study that their wearers are perceived almost as able-bodied. Bionic prostheses have not only a functional benefit for their wearers but also a psychological one. But again, this reduces the stigma to a functional problem. The stigma itself does not need to change, and that is far from my idea of an inclusive society.
Activists for inclusion, like Raul Krauthausen, shift the focus from the discourse about the disabled person to the social discourse on disability. They hold the majority responsible for lowering barriers. Could technology lower social barriers?
It seems we are experiencing the early stages of technology lowering barriers. Subtitles on TV shows and on Netflix were initially an aid for people with hearing issues, but now others appreciate subtitles too, as they make life easier – for instance, enjoying content on your mobile device in public transport when you have no headphones with you. Or think about accessibility technology in buildings, originally envisioned for people in wheelchairs but also benefitting the elderly. Technology that makes things barrier-free makes life easier for everyone, and the trend in this direction will continue as more technology arrives. As long as this does not lead to singling people out and to othering, I fundamentally agree with the development. I agree with Raul when he says he is fed up with the demand to change people’s minds before more inclusion can happen. He argues that it works the other way around: making inclusion happen forces change on the majority of society.
When looking at the possibilities of technology, to which aspects should we pay more attention?
The discourse reduces disability to certain limitations of the body. Coming back to a quotation from Hugh Herr, the MIT professor amputated below the knees, who wears bionic legs he developed himself: ‘I don’t see disability, I just see bad technology’. But the central barrier to inclusion is not the inability of the disabled body; it is how disabled bodies are treated by the mainstream of society on the basis of unconscious biases and systematic discrimination.
What should education address or do better?
First of all, we need to create environments where people are compelled to meet and collaborate with people different from themselves. People need experiences with difference, and where better to create such experiences than in educational settings? Learners need to appreciate as normal things that are rare or uncommon. We tend to assume that what is frequent is normal and good. But we need to appreciate that bodies and