ChatGPT: Is the AI tool ready to be your academic GPS, navigating you through your research?

"The world is changing as we speak; therefore adapting ourselves to these changes becomes ever more critical if we hope not to fall behind," says a software engineer 
Pic: EdexLive | (Assisting tool: Dall-e)

OpenAI's ChatGPT is a subject of complex and intense debate among academicians, ranging from scepticism to awe. While some describe it as nothing more than a “dumb assistant” or an “editor” with the potential to improve efficiency, others view it as a powerful and potentially dangerous tool that could serve as a “double-edged sword.” When asked about its impact on academia, ChatGPT itself didn't express much confidence, admitting that it may produce fiction and have biases. Despite its reservations about its accuracy, ChatGPT believes that it can assist researchers in various ways. To shed light on this controversial tool once and for all, EdexLive spoke to numerous academics, scientists, and engineers who shared their insights into what exactly ChatGPT can do – and what one needs to watch out for.

Bangalore University banned the use of the AI bot within months of its release. Additionally, esteemed linguist Noam Chomsky criticised it as akin to "high-tech plagiarism", arguing that relying on these technological advancements ultimately impedes genuine learning opportunities. Nevertheless, with artificial intelligence rapidly gaining prevalence across numerous industries worldwide, including education, there is no denying its significant impact on transforming traditional academic approaches like never before. With today’s AI bots, one can generate essays, paintings, designs and music with just a few clicks. As Adrish Bhattacharya, a software engineer, rightly notes, "The world is changing as we speak; therefore adapting ourselves to these changes becomes ever more critical if we hope not to fall behind."

Unlocking new possibilities: Researchers using ChatGPT
Francis Madden, a trainee clinical psychologist at the University of Oxford and a part-time YouTuber, recently shared a video titled “Using ChatGPT to do an Oxford PhD: My experience so far”. The video, in which he demonstrates four ways he has incorporated the bot into his workflow, has garnered over 1,23,000 views. "It has forever changed the process of scientific research paper writing," he says in the video description. In the video's comments, several other researchers agree, stating that the bot has the potential to revolutionise research and academia. One comment, authored by a user named Ben, reads, "I agree, as long as you are aware of both ChatGPT's and your limitations, then it is a game changer.” He goes on to provide an example from his personal experience in which he employed ChatGPT to condense no fewer than 45 psychology articles for use within an essay. “It provided me with a great overview of the literature, highlighted themes, and even suggested areas for further research to fill gaps in the literature. I have had moral and ethical concerns, but I view it as having an inexpensive, genius-level tutor to help me achieve my MSc and beyond." Another researcher pointed out that while the bot may not always be correct, with a little work, it can produce some great results.

The Bing-enabled ChatGPT
Dr Harish Guruprasad, a faculty member of the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Madras and an expert in theoretical and statistical aspects of machine learning, shares his insights on ChatGPT in academia, stating, "While it can generate seemingly plausible sentences, the bot's lack of knowledge of the truth can result in inaccuracies." While he strongly advises against blindly relying on summaries generated by chatbots, he notes that researchers can use Bing-enabled ChatGPT for research. He further explains, "The advantage of using a search engine like Bing-enabled ChatGPT is that it can lead you to credible scholarly articles for follow-up, which can be verified. Hence, Bing-enabled ChatGPT could potentially be a valuable tool for researchers, as it can provide faster access to information that would otherwise be challenging to find." 

In today's rapidly evolving society, Dr Guruprasad acknowledges the potential benefits of using summarisation tools for scholars who find themselves pressed for time. While it may not be inherently problematic to use such technology to consolidate several articles into a concise summary, he emphasises that it is ultimately up to the individual how they choose to utilise this bot. “If used negatively, or over-relied on, it might hinder the process of learning,” says Dr Guruprasad, underscoring the need for caution. 

"Opportunity for interdisciplinary research"
Beyond its capabilities in summarising, revising and accessing information, ChatGPT can be a valuable tool in interdisciplinary research. It enables scholars from different fields to easily access information, which is particularly helpful when they need data or information on specific terms outside their area of expertise. Dr Tapas Kumar Mishra, of the National Institute of Technology (NIT), Rourkela, recognises this, stating, “Say I need data from a different subject, like biology about terms that I have no knowledge of. I can’t read the whole book for that single term. In such a case, ChatGPT can aid the researchers tremendously.” This acknowledgement highlights just how significant this technology could be when it comes to facilitating collaboration between scholars working within different domains of knowledge.

The art of prompts and the academic revolution: ChatGPT as an editor and translator
Writing in English has been a persistent challenge for many researchers from South Asian countries. This issue can become a significant hindrance to their progress and could potentially prevent them from being taken seriously within the academic community. However, with the advent of ChatGPT, could this AI-powered tool level the playing field for non-native English-speaking academics? Our discussions with various professors and academicians suggest there's definitely potential here. With its powerful language tools, ChatGPT can assist in polishing language to meet the standard of clarity and coherence that may be lacking otherwise. The bot can be utilised by researchers to identify grammatical errors, improve sentence structure, proofread, rephrase, translate, and more. "It is easier to learn how to use the tool than develop the skill of language itself," says Dr Mishra, emphasising the user-friendly nature of ChatGPT.

In a similar vein, Dr Jasabanta Patro, Assistant Professor, Department of Data Science and Engineering, Indian Institute of Science Education and Research (IISER), Bhopal, says, “Students who excel in analytics but face challenges in English can use ChatGPT to put together their research. This will ultimately improve their writing skills in the long term.” Dr Guruprasad cautions, however, that while polished language can enhance the presentation of research, it can also be used to hide underlying issues in research quality. "Good and convincing English should not be used to mask bad science," he warns, emphasising the importance of maintaining integrity in research.

“Writing should not be compulsory”
Taking it one step further, going beyond the conventional norms of traditional academia, Dr Mushtaq Bilal, a Pakistani postdoctoral researcher in World Literature at the University of Southern Denmark, questions the need for writing to be a compulsory subject. Known for simplifying the process of academic writing and providing tips on how to use AI apps for academic purposes, Dr Bilal is emerging as the leading voice for AI-driven academic writing. His tutorial titled "Become an Efficient Academic Writer with AI Apps" has gained him a significant following on Twitter, with hundreds of thousands of views, and its over 170 slides are being used by more than 1,000 academics.

According to Dr Bilal, his approach to teaching and academic learning is unorthodox. He says, "If I could, I would make writing optional. It should only be for those who want to learn how to write. We don't want every citizen to become adept in mathematics, right? So why do we want every scholar to learn writing?" In his view, compelling researchers to write does not necessarily translate into better quality work or more insightful analysis. Dr Bilal believes that writing should be taught only to those who are interested in learning it, and argues that forcing students to write is a waste of time and resources. In a passionate defence, he also emphasises that the idea of ChatGPT being a threat is "a made-up or fake threat." According to him, the notion of cheating with AI emerges because students are being forced to write. He vehemently argues, "You are imprisoning them and teaching them what you want to teach and not what they want to learn."

Moreover, the glaring absence of proper guidance within academia is a crippling issue that can be mitigated by ChatGPT's capabilities, provided it is used appropriately. Drawing from personal experience, Dr Bilal highlights how applying for scholarships can become easier with bots like ChatGPT. He recalls being rejected by Fulbright in 2015, only to be accepted a year later after gaining access to valuable guidance. Today, he says, AI bots could provide similar support to researchers who lack clear direction or assistance. “I asked ChatGPT to imagine ten questions that might be asked in a Fulbright interview and I was asked seven out of those ten questions,” he reveals, demonstrating how leveraging the bot's strengths could aid researchers. Nonetheless, he warns that using the bot is an art in itself and, if not done properly, it will not produce the desired results. “The trick is, if you are not smart enough, you won’t be able to use it.”

“Use ChatGPT to outsource your labour, not your thinking” 
Crafting a compelling prompt for ChatGPT is not merely an activity, but rather a subtle and refined art, or as Adrish Bhattacharya puts it, a "skill." Therefore, utilising ChatGPT as a tool in research or academia requires careful cultivation, understanding, and mastery. Dr Bilal, through his active presence on Twitter, generously shares his wealth of expertise in guiding researchers on how to effectively use ChatGPT. In his numerous guidance tweets on using ChatGPT in academia, Dr Bilal emphasises the importance of politeness and taking it slow when interacting with the AI. He provides insightful advice by suggesting that one should utilise AI to “outsource labour”, and not their “thinking.” Furthermore, he recommends using AI for creating structure while retaining authority over content creation. In the same tweet, he acknowledges ChatGPT as an exceptional research assistant but firmly believes that it cannot assume the role of a supervisor. 

Dr Bilal's insights on ChatGPT as a research assistant are both insightful and provocative. They are particularly noteworthy given his background in literature, a field where most academics have outright rejected the bot. According to him, “Those who aspire to pursue literature and writing despite their lower middle-class background with no guidance will inevitably acquire the necessary writing skills anyway; for others, it should not be necessary.” Dr Bilal's opinion offers an entirely new dimension to traditional academia, thereby opening doors previously closed.

Ensuring integrity: Hallucinations, biases and plagiarism with ChatGPT
Despite the numerous advantages that chatbots bring to the table, there is no denying that these systems are not without their flaws. Dr Anupam Guha, an AI and AI policy researcher at IIT Bombay specialising in the intersection of language and vision, argues that bots are unreliable for research purposes. According to him, “ChatGPT will always give significant errors because anything based on machine learning is essentially a statistical model with probable errors." Additionally, he emphasises ethical considerations while utilising GPT for research, as even minor misrepresentations could end one's academic career by damaging credibility amongst peers. “The cost of errors and misrepresentation of the truth can end an academic job. So, I will personally never use it without vetting it thoroughly and being very careful with what it produces.”

Dr Guha elaborates that while chatbots may work for fiction writing or non-rigorous tasks, their efficacy is limited. He says he has spoken with fiction writers who find that, despite using chatbots, a significant amount of effort is still required to polish and correct what the bot produces. Sharing a recent social media joke, he says, “Sure you can submit an abstract using ChatGPT. Only after you correct every single word written in that question and spend like three hours making it sound what you wanted it to sound.” In Dr Guha’s opinion, these challenges undermine any advantages provided by such tools, rendering them less effective at achieving end goals. In support of Dr Guha's stance, a literary researcher from the Department of South Asian Studies at Heidelberg University, Germany, explains how AI stumbles when it comes to close reading. “Further it might stall opinion formulation and summation of self-reflection which are backed by lived realities and personal experience.” In addition, she sheds light on the significance of tonal quality, which is pivotal in accurately capturing the ebb and flow of human emotions. “Using AI for research sounds dystopian to me, to be honest,” she says.

Moreover, as a pattern generator, Dr Guha says, ChatGPT cannot be a resource for academic research since it does not understand what it generates. “If you give it 10 words, it can predict the 11th word using patterns from the language that has been fed to it. So, it generates text but has no inferential mechanism. It is like a parrot or a puppet which can say things but not understand them,” he says, explaining that the tool should not be mistaken for knowledge. “Confusing it for knowledge would be non-intelligent or dumb usage of the tool.”
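Dr Guha's "parrot" analogy can be illustrated with a toy sketch (this is not how ChatGPT itself works, which relies on large neural networks rather than simple counts): a bigram model that predicts the next word purely from frequency patterns in its training text, with no notion of meaning.

```python
from collections import Counter, defaultdict


def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows


def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]


# A tiny, made-up training corpus for illustration
corpus = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the cat sat on the sofa"
)
model = train_bigram_model(corpus)
print(predict_next(model, "cat"))  # "sat" follows "cat" more often than "chased"
```

The model happily produces fluent-looking continuations, yet it has no idea what a cat or a sofa is; it only reproduces statistical patterns, which is precisely Dr Guha's point about mistaking generation for knowledge.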

The "hype" and the "hallucinations"
Dr Debanga Raj Neog, Assistant Professor of Data Science and AI at IIT Guwahati, expresses similar concerns about a phenomenon often referred to as "hallucination" in the AI world, which occurs when a bot produces outputs that are incorrect or unrelated to the input provided. “The content it produces sounds plausible but it cannot be validated,” he explains. Additionally, he highlights that current training protocols only account for ten languages, and seeking results outside those ten languages can result in improper responses. “It will take a couple of years to get there, to incorporate more languages.”

There are also growing concerns about ChatGPT potentially engaging in plagiarism and exhibiting its own biases. "ChatGPT in academia is a double-edged sword," cautions Dr Patro, adding, “It can help immensely but can also accelerate plagiarism.” Despite its inability to perform critical thinking or high-level tasks, Dr Patro highlights the vast amount of data that ChatGPT possesses, cautioning that the reports and definitions it generates should be used carefully, and properly cited, to avoid plagiarism. Another major concern, at a macroscopic level, is the company profiting from summarising other people's work without proper attribution. As Adrish puts it aptly, "This giant tech company will make millions off someone else's work just because they have access to it- that sounds messed up."

Nonetheless, Adrish argues that using ChatGPT as a tool is not equivalent to committing plagiarism as it necessitates skill and does not wholly compose one’s paper. “Using this tool effectively itself is a skill. It definitely won’t write the entire paper for you but you can use it to brainstorm or do things that might not require creativity.” He compares using ChatGPT with using a calculator and says, “It is a tool, and you need to know how to use it to make the best out of it. But yes, you cannot just copy-paste definitions and write-ups from it. It requires constant human interference.” 

The final concern with ChatGPT that has arisen pertains to its inherent biases. When asked about these biases, the bot concedes that it does possess them. “The data on which I was trained reflects the bias and perspectives of the people who created that content. A significant portion of the internet content indeed comes from young white men, and that might influence the language and the way certain topics are presented,” it responds. The AI bot also explains at length the various types of bias its training data might carry, such as gender, cultural, confirmation, selection, availability and implicit biases. Concluding, it asserts the need to critically evaluate the results and insights it generates, and says, “They (academics) should also consider using multiple sources and tools to ensure that their work is not overly influenced by any particular perspective or bias.”

When probed about the biases inherent in ChatGPT, Dr Mushtaq Bilal had a very interesting response, astutely pointing out that the tool is not unique in its biases. “Of course, the bot has a definite preference for Western-centric knowledge, but so does Stanford, Harvard and even the immigration system of the United States of America (USA). No tool is without it, however, what we can do to deal with it is beware of it,” he remarks. To deal with the flaws of ChatGPT, Dr Bilal suggests educating oneself on its limitations and then figuring out how to use its capabilities to one's advantage. Expanding on this point, Dr Danny McDougall, Chair of Sign Language Studies at Madonna University, Michigan, USA, in a tweet responding to Dr Bilal, points out how it is a process and not a mere transaction. “Scholarship Via AI: Knowledge Generation:: Calf Implants: Track & Field,” he writes.

As we delved deeper into discussions with various academics on this topic, one resounding message emerged: the ultimate objective should always be centred on learning. “Over-relying on bots might stunt critical thinking skills - a vital mental exercise which must not be neglected. But if it is just a tool to increase productivity, then why not,” says Dr Mishra.

The ‘artificial’ fear
The excitement surrounding ChatGPT is palpable, but it cannot be denied that an air of trepidation and unease looms over the academic community. Dr Anupam Guha believes that much of this apprehension has been artificially manufactured by the company behind ChatGPT itself. “The fear is a result of massively overselling the capabilities of the product,” he says. According to Dr Guha, the bot cannot do the real reasoning it is claimed to be capable of. He insists these claims are untrue and serve as a diversion from the actual things that need to be feared. “The software’s machine learning can be used in surveillance. That is the sort of stuff you have to be fearful of,” he asserts. The professor also addresses concerns raised by colleagues about students using the technology for cheating on assignments or exams. He considers these worries misplaced and says, “I told everyone that's not going to happen. The bot is not capable enough to do it.”

Knowledge is power: Educating students and staff about AI-generated content
As a solution to tackling the impact of ChatGPT and other AI tools on learning effectively, the academicians unanimously advocate for awareness. Dr Mishra, who encourages his students to use ChatGPT for assignments, says, “Accepting the existence of these tools is crucial in adapting to them successfully.” He emphasises that teachers must acquaint themselves with these tools' capabilities and limitations to prevent their misuse or exploitation. “Just be aware of what it is capable and incapable of; awareness is the key.” To deal with misuse, Dr Guruprasad takes another approach, assigning problem-solving tasks beyond the chatbot's capacity. “It can’t do complicated problem solving, you see.” Another professor suggests offline assignments and exams.

Dr Anupam Guha, on the other hand, says that he has been experimenting with ChatGPT on the homework he gives in class. “I try them on ChatGPT myself,” he says, adding, “The results were hilarious.” He found that he could tell why the machine produced the results it did, and that it was wrong in very specific ways for anything sophisticated. Dr Guha also held an open discussion on ChatGPT with his students and challenged them to use it if they could. “I will quickly find out if they use ChatGPT because the questions are convoluted,” he states with confidence.

“A part of evolution”
When it comes to addressing the apprehensions surrounding ChatGPT, Dr Neog doesn't mince words. He reminds us that fears accompany every new technology, but trying to stop it is futile. "Even if you try, it will be illegally published somewhere, and you won't be able to monitor it anymore," he warns. Dr Patro, on the other hand, takes us back to the early days of computers, when similar anxieties prevailed. "People thought computers would kill all jobs, but instead, they opened up new avenues and created better quality jobs, improving our lives. It's all part of evolution," he explains. Drawing on this comparison, Dr Patro further stresses that new policies around AI will lead to more regulation. Moreover, embracing new AI tools doesn't necessarily mean abandoning traditional methods; they can complement existing practices by enhancing productivity and efficacy.

Other AI tools for researchers
When it comes to using AI tools for research, ChatGPT is just the tip of the iceberg. Several other AI applications can be far more effective in aiding researchers. Dr Mushtaq Bilal, who says, “I always say that don’t use ChatGPT unless you know how the bot works,” recommends other AI-powered tools such as ‘Scite’, a tool that helps researchers check references. “It gives you smart citations by checking the accuracy and quality of references in the research paper,” he says. However, ‘Scite’ comes with a charge. For those seeking cost-free options, Dr Bilal suggests using ‘Research Rabbit’. This tool helps researchers gather materials and also provides them with visual connections between the articles.

Delving into the contents of Dr Bilal’s Twitter account, one can uncover a plethora of such powerful AI tools for research purposes. In one such tweet, he writes, “Don't use ChatGPT for academic writing. It's not designed for academic purposes and generates fake citations. Instead, install an AI-powered editor for academic writing inside your MS Word”. He continues the thread to suggest ‘PaperPal’ as his preferred choice. He also mentions ‘Scholarly’, an AI bot that can convert any research paper into a PowerPoint presentation instantly. With the increasing prevalence of these AI tools, the question arises: will they become the new navigating tool for researchers in academia?
