SGS Brownbag Seminar

SGS Brownbag: Dirk Lindebaum, Grenoble School of Management

We need to talk about ChatGPT! On why management educators must resist that siren song
Dirk Lindebaum and Peter Fleming

Published on 1 Sept 2023
Location: Extranef, EXT 110
Format: In person

With ChatGPT being promoted to and by academics for writing scholarly articles more effectively, we ask what kind of knowledge is produced by ChatGPT, what this means for our reflexivity as responsible management educators/researchers, and how a lack of reflexivity disqualifies us from shaping management knowledge in responsible ways. Our essay, therefore, unpacks the epistemological limitations inherent in ChatGPT. We urgently need to grasp what makes human knowledge distinct compared to knowledge generated by ChatGPT et al. We first detail how ChatGPT ‘works’. With a nod to Kant’s Critique of Pure Reason, we then argue that ChatGPT produces bad knowledge and relies on ‘lazy reasoning’ given that it is acontextual, irresponsible and lacks human reflexivity. Using high-probability choices that are derivative, ChatGPT has no stake in the knowledge it produces. By contrast, genuine human thinking – embodied in textual reasoning – uses low-probability choices both ‘inside’ and ‘outside’ the box of the training data, making it creative, contextual and committed. We conclude that the use of ChatGPT is wholly incompatible with scientific responsibility and responsible management, both of which require a firm commitment to human reflexivity in the production of knowledge and human judgement in the application of it.

Speaker(s)

Dirk Lindebaum

Grenoble School of Management

Organization

Patrick Haack
