SYNTHETICA
AI Efficiency Research Summarization
Project Created By:
• Haripriya G [411622149007]
• Shruthi Kamal [411622149019]
• Avanthika R [411622243004]
• Lilymalar R [411622243021]
Project Reviewed By:
Ms. DHIVYA T
Project Code: PE008
Table of Contents:
Executive Summary:
Project Objective:
Scope:
2. Summarization Methodologies
3. Evaluation Metrics
Intrinsic Evaluation Metrics:
➢ ROUGE Scores: Recall-oriented measures that evaluate the
overlap between system-generated and reference summaries
(see the sketch after this list).
➢ BLEU Scores: Precision-based n-gram metrics, traditionally
used in machine translation but also applied to summarization.
➢ METEOR, CIDEr: Additional metrics that account for synonyms
and stemming.
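As a concrete illustration, the following is a minimal sketch of how ROUGE and BLEU scores could be computed for one candidate summary against one reference, assuming the third-party Python packages rouge-score and nltk are installed; the example sentences are placeholders, not project data.

```python
# Minimal sketch: score one candidate summary against one reference.
# Assumes `pip install rouge-score nltk`; the texts below are placeholders.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "The study evaluates AI-generated summaries against human-written abstracts."
candidate = "The paper compares AI summaries with abstracts written by humans."

# ROUGE: recall-oriented n-gram and longest-common-subsequence overlap.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)
print({name: round(score.fmeasure, 3) for name, score in rouge.items()})

# BLEU: precision-oriented n-gram overlap, smoothed for short texts.
smooth = SmoothingFunction().method1
bleu = sentence_bleu([reference.split()], candidate.split(), smoothing_function=smooth)
print(round(bleu, 3))
```

Higher F-measure and BLEU values indicate greater lexical overlap with the reference; they do not by themselves capture factual fidelity, which is why the extrinsic evaluations below are also needed.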
Extrinsic Evaluation Metrics:
➢ Task-Based Evaluations: Assess how well the summary aids in
specific tasks like information retrieval or decision-making.
➢ User-Centric Evaluations: Collect feedback from users regarding
the summary’s usefulness, readability, and informativeness.
6. Human-AI Collaboration
Methodology:
1. Collection of the Research Paper:
3. Human-Generated Abstract:
• Process:
o Both the AI-generated abstract from Lightning PDF AI and
the human-generated abstract are analyzed using
Perplexity AI.
o Metrics for comparison include:
▪ Coherence: The logical flow and understandability of
the abstract.
▪ Relevance: How well the abstract covers the key
points of the research paper.
▪ Readability: The ease with which the abstract can
be read and understood by the target audience.
o The analysis results provide insights into the strengths
and weaknesses of each abstract, highlighting the
effectiveness of AI tools compared to human-generated
content (a rough illustration of automatable checks for
these metrics is sketched after this list).
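The sketch below illustrates, under stated assumptions, how two of the comparison metrics could be approximated automatically: readability via the Flesch Reading Ease score (textstat package) and relevance via TF-IDF cosine similarity against the paper (scikit-learn). It is a stand-in illustration rather than the Perplexity AI analysis used in the project; coherence is left to the LLM-based and human review, and the sample strings are placeholders.

```python
# Rough, automatable proxies for readability and relevance.
# Assumes `pip install textstat scikit-learn`; all texts are placeholders.
import textstat
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paper_text = ("Placeholder full text of the research paper, including its "
              "methods, results, and discussion sections.")
ai_abstract = "Placeholder abstract generated by an AI tool for the paper."
human_abstract = "Placeholder abstract written by a human author for the same paper."

def report(label, abstract, source):
    # Readability: Flesch Reading Ease (higher scores read more easily).
    readability = textstat.flesch_reading_ease(abstract)
    # Relevance proxy: TF-IDF cosine similarity between abstract and paper text.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform([abstract, source])
    relevance = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    print(f"{label}: readability={readability:.1f}, relevance={relevance:.2f}")

report("AI abstract", ai_abstract, paper_text)
report("Human abstract", human_abstract, paper_text)
```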
5. Website Creation:
Expected Outcomes:
1. Comprehensive Evaluation Report: A detailed report evaluating
the performance of different AI summarization models, highlighting
which models perform best in specific domains.
2. Recommendations: Practical recommendations for researchers
and developers on using AI summarization tools.
3. Benchmark Dataset: A publicly available benchmark dataset for
the research community to use in future evaluations and model
improvements (a possible record layout is sketched after this list).
4. Research Publications: Academic papers detailing the findings
and methodologies used in the project.
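Purely as an illustration of what one entry in such a benchmark might contain, the sketch below shows a hypothetical record layout; every field name is an assumption for this example, not the project's published schema.

```python
# Hypothetical benchmark record; all field names are illustrative assumptions.
import json

record = {
    "paper_id": "example-0001",
    "domain": "biomedical sciences",
    "source_title": "Example research paper title",
    "reference_abstract": "Human-written abstract used as the gold standard.",
    "ai_summaries": {
        "lightning_pdf_ai": "Abstract generated by an AI summarization tool.",
    },
    "scores": {"coherence": None, "relevance": None, "readability": None},
}

print(json.dumps(record, indent=2))
```

Storing one such record per paper (for example, as JSON Lines) would keep the dataset easy to share and extend.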
Timeline:
1. Phase 1:
• Project planning and dataset collection.
• Initial selection and training of AI models.
2. Phase 2:
• Fine-tuning models and initial evaluations.
• Conducting human evaluations and surveys.
3. Phase 3:
• Data analysis and synthesis of results.
• Preparation of the benchmark dataset.
4. Phase 4:
• Compilation of the evaluation report and recommendations.
• Submission of research publications.
• Public release of the benchmark dataset.
Team Composition:
Project Lead: Oversees the project, ensures objectives are met,
and coordinates between team members.
AI Researchers: Responsible for selecting, training, and
evaluating AI models.
Data Scientists: Handle data collection, preprocessing, and
analysis.
Human Evaluators: Conduct human evaluations and surveys.
Technical Writers: Compile reports, documentation, and
research publications.
Artifacts used:
2. Visme
Use: Visme provides tools for creating infographics, presentations,
and posters. Its AI features can help you design layouts and choose
color schemes.
Website: [Visme.co](https://www.visme.co/)
3. Adobe Spark
Use: Adobe Spark offers easy-to-use design tools for creating
posters, flyers, and social graphics. AI-driven features can help with
layout suggestions and design enhancements.
Website: [Adobe Spark](https://spark.adobe.com/)
4. PosterMyWall
Use: PosterMyWall offers customizable templates for posters. AI
tools assist in optimizing images and text to create professional-
looking designs.
Website: [PosterMyWall.com](https://www.postermywall.com/)
5. DesignCap
1. Scholarcy
Use: Scholarcy can summarize research papers, extracting key
points and creating concise content that can be used in your poster.
Website: [Scholarcy.com](https://www.scholarcy.com/)
2. TLDR This
Use: TLDR This generates brief summaries of long research articles,
which can be useful for creating concise, impactful content for your
poster.
Website: [TLDRthis.com](https://www.tldrthis.com/)
3. Grammarly
Use: Grammarly provides AI-driven writing assistance, ensuring
your poster content is clear, concise, and free of errors.
Website: [Grammarly.com](https://www.grammarly.com/)
4. QuillBot
1. Research Summarization:
• Use tools like Scholarcy or TLDR This to extract key points
from the research papers you want to highlight (a programmatic
stand-in for this step is sketched after this list).
2. Content Creation:
• Refine the summarized content using Grammarly and QuillBot
to ensure clarity and brevity.
4. Final Touches:
• Ensure the poster has a logical flow and that all elements are
aligned. Use Adobe Spark or DesignCap for any final design
tweaks.
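Scholarcy, TLDR This, Grammarly, and QuillBot are used here through their web interfaces. As a rough programmatic stand-in for the summarization step only, the sketch below uses the Hugging Face transformers library; the model name is an example choice and not part of the original workflow.

```python
# Stand-in for step 1 (research summarization) using a local model.
# Assumes `pip install transformers torch`; the passage is a placeholder.
from transformers import pipeline

paper_section = (
    "Example passage from a research paper describing the study design, the "
    "evaluation of AI-generated summaries against human-written abstracts, "
    "and the main findings that should appear as key points on the poster."
)

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
summary = summarizer(paper_section, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

The resulting summary text would still go through the Grammarly/QuillBot refinement and the design steps described above.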
1. Scholarcy
Use: Scholarcy summarizes research papers, highlighting key points
and extracting figures and tables. It can generate concise summaries
that can be converted into video scripts.
Website: [Scholarcy.com](https://www.scholarcy.com/)
2. TLDR This
Use: TLDR This creates short summaries of long articles and papers.
It’s useful for generating brief content suitable for video scripts.
Website: [TLDRthis.com](https://www.tldrthis.com/)
3. Scribendi
Use: Scribendi offers AI tools for summarizing text and refining
content. It can help ensure that your video script is clear and concise.
Website: [Scribendi.com](https://www.scribendi.com/)
General Prompts
2. Comparative Analysis:
⚫ "Compare the summarization quality of SYNTHETICA with other
leading AI summarization tools in terms of relevance, conciseness,
and readability." (A minimal sketch of issuing such a prompt
programmatically follows this list.)
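If the prompts above were issued programmatically rather than through a chat interface, a minimal sketch might look like the following. It assumes an OpenAI-compatible chat-completion endpoint; the model name is a placeholder, and Perplexity AI itself was accessed through its own interface in this project.

```python
# Minimal sketch of sending an evaluation prompt to a chat-completion API.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

prompt = (
    "Compare the summarization quality of SYNTHETICA with other leading AI "
    "summarization tools in terms of relevance, conciseness, and readability."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```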
Field-Specific Prompts
8. Discipline-Specific Evaluation:
⚫ "Examine the performance of SYNTHETICA in summarizing
research papers in [specific discipline, e.g., biomedical sciences,
social sciences, engineering]."
⚫ "How well does SYNTHETICA handle the unique summarization
challenges present in interdisciplinary research papers?"
Results:
The results of "Synthetica: Evaluating AI Efficacy in Research
Summarization" reveal a nuanced landscape in which AI systems
demonstrate notable capabilities alongside identifiable limitations.
Through rigorous evaluation against human-generated summaries, AI
models exhibit promising proficiency in generating concise and
coherent research summaries. However, the study highlights areas
where AI performance falls short, particularly in accurately capturing
nuanced contexts and ensuring the fidelity of information. These
findings underscore the ongoing need for refinement and innovation
in AI summarization.
Resolution:
To address this challenge, the project incorporated a comparative
analysis using Perplexity AI to evaluate the coherence, relevance, and
readability of the AI-generated abstracts. This analysis provided
empirical data that helped identify the strengths and weaknesses of
the AI-generated content. Additionally, human oversight was included
through the use of GitHub Copilot to ensure that the human-
generated abstracts served as a high-quality benchmark. This dual
approach ensured that any deficiencies in the AI-generated abstracts
could be systematically identified and addressed.
Resolution:
The project team established a streamlined workflow for
integrating these tools. Clear protocols were defined for how each
tool would be used, including standardized input and output formats.
This involved preprocessing the research paper to ensure
compatibility with Lightning PDF AI and post-processing the outputs
to facilitate easy comparison using Perplexity AI. Regular testing and
iteration helped iron out integration issues, ensuring a smooth and
efficient workflow across the different tools.
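As one example of the preprocessing mentioned above, the sketch below extracts and lightly normalizes the text of a research paper PDF, assuming the pypdf package is installed; the file name is a placeholder, and Lightning PDF AI and Perplexity AI themselves are used through their own interfaces.

```python
# Minimal preprocessing sketch: pull plain text out of the paper's PDF.
# Assumes `pip install pypdf`; the file path is a placeholder.
from pypdf import PdfReader

reader = PdfReader("research_paper.pdf")  # placeholder path
pages = [page.extract_text() or "" for page in reader.pages]
full_text = "\n".join(pages)

# Collapse whitespace so the text pastes into downstream tools consistently.
full_text = " ".join(full_text.split())
print(full_text[:500])
```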
Resolution:
The use of Perplexity AI enabled a detailed evaluation based on
multiple metrics: coherence, relevance, and readability. By defining
clear criteria for each metric and using a robust analytical tool, the
project could generate detailed and actionable insights. Additionally,
involving domain experts in the evaluation process provided a
qualitative layer of analysis, ensuring that the findings were not only
quantitatively robust but also contextually relevant.
Resolution:
Standardized templates and guidelines were created for abstract
generation and analysis. This included clear instructions for using the
AI tools consistently across the workflow.
Conclusion:
Outcomes:
The project has successfully created a centralized resource that
hosts the original research paper, AI-generated abstract, human-
generated abstract, and comparative analysis results. This not only
facilitates quick access to important research findings but also
provides empirical data on the performance of AI in abstract
generation. By offering these resources, the project improves the
understanding of neurodegenerative diseases and highlights the
current capabilities and limitations of AI tools. The comparative
analysis using Perplexity AI provides valuable insights into how well
AI tools can replicate human-like summaries, informing future
developments in AI for academic purposes.
Implications:
For researchers and academics, this project showcases the
practical application of AI in academic research, encouraging the
integration of AI tools into their workflow for more efficient literature
reviews and summaries. For AI and tech developers, the findings
offer feedback on the performance of tools like Lightning PDF AI and
GitHub Copilot, highlighting areas for improvement and further
development. In the context of education and training, the project
serves as an educational tool, promoting digital literacy and
awareness of modern technological tools. For the medical
community, improved access to summarized research aids healthcare
professionals in staying updated with the latest findings in
neurodegenerative diseases, potentially informing clinical practices
and patient care. Additionally, the methodology and findings of this
project can serve as a foundation for further studies on the use of AI
in academic research and summarization.
References:
AI poster:
https://www.canva.com/
https://www.visme.co/
https://spark.adobe.com/
https://www.postermywall.com/
https://www.designcap.com/
Content Creation:
https://www.scholarcy.com/
https://www.tldrthis.com/
https://www.scribendi.com/
https://www.quillbot.com/
Final project (click to view the project website):
https://amen-h.my.canva.site/researchpaper-anaysis