Publication Status
IJIFR © 2025

Licensed under CC BY-NC-SA 4.0

Recently Published
Paper ID IJIFR/V13/E9/001 Page No.: 1401-1406

Subject Area Computer Engineering

Authors Shaik Muqthiyar
S. Manjunath Reddy

Abstract IndicTrans AI is a Transformer-based neural machine translation system engineered to bridge the communication gap between English and Hindi, one of the most widely spoken languages of the Indian subcontinent. With more than half a billion Hindi speakers across India and the global diaspora, the demand for accurate, efficient and computationally feasible automated translation systems remains substantial. The proposed system implements a lightweight yet structurally complete encoder-decoder Transformer architecture, trained on curated English-to-Hindi sentence pairs and deployed through a Streamlit web interface that makes neural translation accessible to non-technical users. The system is implemented entirely in Python using PyTorch as the deep-learning framework. A SimpleTransformer model class encapsulates two embedding layers, an nn.Transformer encoder-decoder block configured with a model dimension of sixty-four, two attention heads and single encoder and decoder layers, and a linear output projection. The model is trained using the Adam optimizer with cross-entropy loss over fifty epochs, while source and target vocabularies are built from the corpus through word-level tokenization with reserved indices for padding and unknown tokens. Trained model weights and vocabulary mappings are persisted using PyTorch state-dictionary serialization and Python's pickle module, enabling efficient reuse across inference sessions. The Streamlit-based web application provides an intuitive two-column interface in which users enter English text and receive the Hindi translation in real time; output words are reconstructed from predicted token indices via a reverse Hindi vocabulary mapping. IndicTrans AI demonstrates the feasibility of Transformer-based machine translation at a compact scale, providing a foundation for extending coverage to additional Indian regional languages and incorporating more sophisticated training regimes and larger multilingual corpora.
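The word-level vocabulary construction described above can be sketched as follows; reserving index 0 for padding and index 1 for unknown words is an illustrative assumption about index placement, not a detail confirmed by the paper:

```python
def build_vocab(sentences, pad_token="<pad>", unk_token="<unk>"):
    """Build a word-level vocabulary with reserved indices.

    Index 0 is reserved for padding and index 1 for unknown words;
    the exact index assignment is an assumption for illustration.
    """
    vocab = {pad_token: 0, unk_token: 1}
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab, unk_token="<unk>"):
    """Map words to indices, falling back to the unknown index."""
    return [vocab.get(w, vocab[unk_token]) for w in sentence.lower().split()]

vocab = build_vocab(["the cat sat", "the dog ran"])
print(encode("the cat ran fast", vocab))  # [2, 3, 6, 1] -- unseen word maps to 1
```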

Keywords Neural machine translation; Transformer; English to Hindi; PyTorch; Streamlit


Paper ID IJIFR/V13/E8/061 Page No.: 1359-1364

Subject Area Artificial Intelligence

Authors Karan Sundriyal
Afzal Khan
Akshat Gupta
Sonali Maurya

Abstract The ability to interpret spoken language in real time is an essential tool for breaking down language barriers in our globalized society. Existing multilingual communication systems built on digital media and video conferencing technology struggle with delayed response times, poor interoperability, and translation results that vary in accuracy. This research presents a unified system that connects Automatic Speech Recognition (ASR) and Neural Machine Translation (NMT) with web-based real-time communication technologies, creating an efficient communication pipeline. The system processes live audio during video calls, using optimized speech recognition models to generate subtitles in the listener's chosen language and thereby supporting natural cross-lingual communication without interruptions. It uses WebRTC technology to stream audio in real time and machine learning to transcribe and translate that audio with high performance and scalability. Advanced filtering and confidence-based validation techniques guard against three major ASR output problems: background noise interference, varied accent pronunciations, and incorrect content generation. By embedding translation directly into the communication process, the system eliminates the need for external translation resources and reduces user interaction friction. Integrated speech translation systems of this kind can improve communication in education, healthcare, and international cooperation by creating digital platforms that users can access regardless of their language skills.
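The confidence-based validation mentioned in the abstract can be sketched as below; the 0.6 threshold and the (text, confidence) segment structure are illustrative assumptions, not values taken from the paper:

```python
def filter_asr_segments(segments, min_confidence=0.6):
    """Drop ASR segments whose recogniser confidence is too low.

    `segments` is assumed to be a list of (text, confidence) pairs;
    the 0.6 threshold is an illustrative choice, not from the paper.
    """
    kept = []
    for text, confidence in segments:
        # Low-confidence hypotheses often correspond to background noise
        # or spurious content, so they are withheld from translation.
        if confidence >= min_confidence and text.strip():
            kept.append(text)
    return kept

segments = [("hello everyone", 0.93), ("uh kzhh", 0.31), ("welcome", 0.75)]
print(filter_asr_segments(segments))  # ['hello everyone', 'welcome']
```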

Keywords Speech to speech translation (S2ST), Automatic Speech Recognition (ASR), Machine Learning Model (MLM), Web Real Time Communication (WebRTC)


Paper ID IJIFR/V13/E8/060 Page No.: 1350-1358

Subject Area Economics

Authors Dr. Neethu S. Arrakal

Abstract This study examines the economic factors influencing customer satisfaction and post-purchase behaviour with special reference to VKC Footwear. In a competitive and price-sensitive market, understanding consumer perception of economic value is essential for enhancing satisfaction and loyalty. The study aims to identify the key economic determinants such as price affordability, value for money, price fairness, and cost–benefit perception that contribute to overall customer satisfaction. The research is based on primary data collected through a structured questionnaire. The analysis was carried out using descriptive and inferential statistical tools. Mean and standard deviation were used to assess the level of customer satisfaction across various economic factors. Item-wise and factor-wise analyses were conducted to identify significant contributors to satisfaction and dissatisfaction, and an Economic Satisfaction Index was constructed to measure overall satisfaction. Further, simple linear regression and multiple regression analyses were applied to examine the relationship between economic factors and customer satisfaction, and to identify the most significant predictors influencing post-purchase behaviour. The findings reveal that value for money, price affordability, and financial satisfaction have a strong positive influence on customer satisfaction, while dissatisfaction-related factors such as perceived expensiveness have a relatively lower impact. Regression results indicate that key economic variables significantly predict customer satisfaction and influence post-purchase behaviour. The overall economic satisfaction index shows that customers are moderately to highly satisfied, with low variation in responses. The study concludes that economic factors play a crucial role in shaping customer satisfaction and post-purchase behaviour.
Effective pricing and value-based strategies are essential for enhancing customer loyalty and sustaining competitive advantage.

Keywords Customer Satisfaction, Economic Factors, Value for Money, Post-Purchase Behaviour


Paper ID IJIFR/V13/E8/057 Page No.: 1335-1349

Subject Area Education

Authors Dr. Savita Panwar

Abstract This paper presents a comparative analysis of the session-wise enrolment trends of IGNOU Regional Centre Chandigarh (RCC) and IGNOU Regional Centre Dehradun (RCD) from 2010 to 2025. The paper also focuses on the various student support services provided by both Regional Centres. It compares factors such as the use of ICT and social media platforms to reach out to learners for the enhancement of the Gross Enrolment Ratio (GER) as envisaged in NEP 2020; the differences in the socio-demographic profiles of the two regions and their effect on session-wise enrolment at both Regional Centres; the effect of COVID-19 on enrolment and on the delivery of online and offline student support services by both RCs and their Learner Support Centres (LSCs); and the strategies used by both RCs for the promotion and publicity of IGNOU programmes on offer, including new programmes launched from time to time, among prospective learners in geo-physically inaccessible areas and marginalised sections of society. For the purpose of the present study, data have been collected from primary and secondary sources, and the analysis is presented in a coherent frame.

Keywords Gross Enrolment Ratio (GER), NEP2020, ICT, Publicity and Promotion, Regional Centres, Learners Support Centres


Paper ID IJIFR/V13/E8/054 Page No.: 1322-1334

Subject Area Law

Authors Sakshi Sharan

Abstract The increasing participation of women in the workforce represents a major change in the social and economic conditions of India. However, women still face difficulties in achieving equality at work, even with a strong set of labour laws, progressive policy initiatives, and legislation protecting them. This research paper critically examines the regulatory framework that governs women's employment in the formal retail sector in India and analyses both the protective framework and the implementation challenges that render it inefficient. The paper uses a mixed-method approach combining doctrinal analysis of legal provisions specifically attuned to women's needs, including welfare and health provisions under the Factories Act (1948), the Maternity Benefit Act (2017), the POSH Act (2013), social security laws, and the new consolidated labour codes, with empirical insights gathered through semi-structured interviews with women workers in the organised retail sector. This methodology aims to provide a detailed view of the gap between law in books and law in action and to offer comprehensive recommendations. The paper argues that while India's laws look good on paper, many implementation issues remain, such as lack of awareness of rights, poor enforcement, unsafe working hours, informal deductions, and limited facilities at workplaces. The recommendations include stronger employer compliance, targeted inspections, workplace awareness campaigns, compulsory social security enrolment, enforced seating arrangements, and monitoring of retail chains.

Keywords Women workforce participation; Indian labour law; gender justice; maternity benefits; occupational safety; labour codes; POSH; implementation gap


Paper ID IJIFR/V13/E8/053 Page No.: 1365-1376

Subject Area Education

Authors RESMI N C
Dr. R. Jeyanthi

Abstract The integration of technology in education has necessitated a redefinition of teacher competencies, emphasizing the importance of Technological Pedagogical Content Knowledge (TPACK) and teacher self-efficacy. This paper presents a comprehensive review of literature examining the relationship between TPACK and self-efficacy and their collective influence on teaching effectiveness. Drawing on empirical and theoretical studies, the review identifies key themes, including the positive association between TPACK and self-efficacy, the mediating role of self-efficacy, the influence of professional development, and inconsistencies in the alignment between perceived competence and actual knowledge. The review also highlights contextual and demographic factors affecting these constructs. The study concludes by identifying research gaps and proposing directions for future research, particularly in developing integrated models and examining mediating variables.

Keywords TPACK, teacher self-efficacy, pedagogical content knowledge, technology integration, literature review


Paper ID IJIFR/V13/E8/052 Page No.: 1302-1309

Subject Area Sociology

Authors REXEN JACOB R

Abstract Older citizens (above 60 years) today constitute 8.6% of India's total population (about 60 million), and roughly 25% of this population currently resides in old age homes across India for various social, economic, cultural and familial reasons. The present living conditions of the elderly in old age homes are not commendable, yet these institutions provide cherished memories and address the economic and health vulnerabilities of the elderly. Research has found that the setting of old age homes and their employees influence the welfare and health care of the residents. Despite the various services provided, researchers have found that service gaps persist between staff and residents of old age homes. In the present scenario, there is a need to understand the living standards of older persons in old age homes, to study the various services offered to them, and to examine how residents spend their lives there, in order to improve their living conditions with due regard to their economic and health vulnerabilities. The main objectives of the present study are (i) to identify the socio-economic background and role of old age care homes for the elderly, (ii) to examine the services of care homes for the health vulnerabilities of the elderly, and (iii) to analyse the institutional support for handling the economic vulnerabilities of the elderly in Kerala. The study is based on a mixed research methodology; a structured interview schedule and guide were used to collect data from respondents of the Government Old Age Home, Alappuzha, Kerala. The analysis of the data shows that the problems the residents face are physical disabilities, health problems, emotional problems, and lack of support from children and family members. The home is funded through donations and does not engage in any promotional activities.

Keywords Older persons, care home, old age homes, institutional support, economic vulnerability, health vulnerability


Paper ID IJIFR/V13/E8/051 Page No.: 1288-1295

Subject Area Computer Science

Authors Gajula Sai Rohith
V. Vijayalakshmi

Abstract Contemporary retail operations are confronted with escalating complexity in demand forecasting and inventory management, driven by non-linear demand variability, promotional amplification, seasonal fluctuations, and the competitive imperatives of omnichannel commerce. Conventional forecasting paradigms—predicated upon moving averages, exponential smoothing, and linear extrapolation—demonstrably fail to capture the multivariate, non-linear interactions governing retail demand dynamics. This paper introduces SmartRetail AI, a comprehensive, end-to-end retail intelligence platform that integrates ensemble machine learning with classical inventory optimisation theory within a unified data architecture. The system is constructed upon a structured synthetic dataset encompassing 4,250 daily transaction records across five representative Fast-Moving Consumer Goods and e-commerce product categories, spanning 850 operational days. The proposed forecasting engine deploys product-level Random Forest Regressor models—each trained independently to capture category-specific demand dynamics—to generate thirty-day forward demand forecasts. Engineered temporal features including day-of-week indicators, monthly seasonality encodings, weekend binary flags, and promotional activity markers constitute the input feature space. The inventory optimisation module applies the classical safety stock formulation at a 95% service level target (Z = 1.65), computing product-specific reorder points as a function of historical demand variability and a three-day supply lead time. Analytical outputs are persisted in a normalised MySQL relational schema comprising three tables—historical sales, demand forecasts, and inventory metrics—and rendered through an executive intelligence dashboard. 
Comparative evaluation against moving average baselines demonstrates Mean Absolute Error reductions of 23.4% to 47.8% across product categories, with Root Mean Square Error improvements ranging from 19.7% to 44.1%. The platform achieves a Mean Absolute Percentage Error of 12.7% for high-volume staples and 24.3% for low-velocity electronics, establishing its operational viability across diverse retail demand profiles. These results validate the proposed framework as a practically deployable, analytically rigorous alternative to legacy forecasting methodologies.
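The classical safety stock formulation referenced in the abstract (95% service level, Z = 1.65, three-day lead time) can be sketched as below; the demand history is invented for illustration:

```python
import statistics

def reorder_point(daily_demand, lead_time_days=3, z=1.65):
    """Reorder point = expected lead-time demand + safety stock.

    Safety stock uses the classical formula z * sigma_d * sqrt(lead time)
    at a 95% service level (z = 1.65), as the abstract describes; the
    demand history passed in below is invented for illustration.
    """
    mean_d = statistics.fmean(daily_demand)
    sigma_d = statistics.stdev(daily_demand)  # historical demand variability
    safety_stock = z * sigma_d * (lead_time_days ** 0.5)
    return mean_d * lead_time_days + safety_stock

history = [120, 135, 110, 150, 128, 142, 115]  # hypothetical daily sales
print(round(reorder_point(history), 1))
```

Higher demand variability inflates the safety stock term, so erratic products reorder earlier than steady sellers with the same mean demand.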

Keywords Demand Forecasting; Random Forest Regressor; Inventory Optimisation; Safety Stock; Retail Intelligence; Ensemble Machine Learning; Supply Chain Management


Paper ID IJIFR/V13/E8/050 Page No.: 1279-1287

Subject Area Computer Engineering

Authors Gurramkoda Charan Kumar
S. Usharani
V. Vijayalakshmi

Abstract Contemporary e-commerce platforms generate voluminous transactional and behavioral data streams that, absent a structured analytical framework, remain commercially inert. ShopMind 360 is proposed as a comprehensive, end-to-end behavioral analytics and personalization engine that synthesizes four complementary machine learning and data mining methodologies within a single, automated pipeline. The system operationalizes Recency-Frequency-Monetary (RFM) analysis for multidimensional customer value quantification, K-Means clustering for behavioral segmentation into four actionable customer archetypes (Champions, Loyal Customers, At-Risk, and Hibernating), Random Forest ensemble classification for continuous purchase-probability scoring, and the Apriori algorithm for market-basket association rule mining to generate confidence-ranked product recommendations. The analytical pipeline ingests a synthetic yet realistically parameterized dataset comprising 500 customer profiles, 50 product records, 2,000 transactional orders, and 5,000 browsing interaction logs, structured within a multi-sheet Excel workbook and subsequently persisted to a normalized MySQL relational database. All computed analytical results are surfaced through an interactive Power BI dashboard, providing business stakeholders with filterable, real-time-refreshable visualizations of customer segments, revenue contributions, purchase probability distributions, and product affinity rules. Experimental evaluation demonstrates that the four-segment K-Means model achieves stable cluster centroids with well-separated RFM profiles, the Random Forest classifier attains high discriminative accuracy in identifying high-value customer segments, and Apriori mining yields statistically significant association rules with lift values substantially exceeding unity. 
The system architecture adheres to modular design principles, enabling independent maintenance and extensibility of each analytical component without pipeline restructuring. ShopMind 360 establishes a replicable, open-source blueprint for data-driven customer engagement in small-to-medium-scale e-commerce operations.
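The RFM quantification step described above can be sketched in plain Python; the (customer, date, amount) row layout is an assumption for illustration, and the real pipeline feeds these values into K-Means rather than stopping here:

```python
from datetime import date

def rfm(transactions, today):
    """Compute Recency (days since last order), Frequency (order count)
    and Monetary (total spend) per customer.

    `transactions` is assumed to be (customer_id, order_date, amount)
    rows; the field layout is illustrative, not the paper's schema.
    """
    scores = {}
    for customer, order_date, amount in transactions:
        last, freq, money = scores.get(customer, (None, 0, 0.0))
        last = order_date if last is None else max(last, order_date)
        scores[customer] = (last, freq + 1, money + amount)
    # Convert the last-order date into a recency in days.
    return {c: ((today - last).days, f, m) for c, (last, f, m) in scores.items()}

orders = [
    ("C1", date(2025, 6, 1), 120.0),
    ("C1", date(2025, 6, 20), 80.0),
    ("C2", date(2025, 4, 10), 300.0),
]
print(rfm(orders, today=date(2025, 7, 1)))
```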

Keywords Recency-Frequency-Monetary Analysis; K-Means Clustering; Random Forest Classification; Apriori Association Rule Mining; Customer Behavioral Segmentation


Paper ID IJIFR/V13/E8/049 Page No.: 1268-1278

Subject Area Computer Engineering

Authors Avadutha Prathibha
S. Usharani

Abstract The increasing complexity of healthcare systems and the global shortage of medical professionals have amplified the need for intelligent clinical decision support systems capable of assisting in early disease diagnosis. This paper presents an AI-driven medical diagnosis support system that leverages machine learning techniques to predict diseases based on patient-reported symptoms. The proposed system utilizes a Random Forest Classifier trained on a high-dimensional dataset comprising over 130 symptoms mapped to more than 40 disease categories. The model employs multi-hot encoding for symptom representation and generates probabilistic predictions with associated confidence scores. The system is implemented as a web-based application using the Django framework, enabling role-based interaction for both patients and doctors. Patients can input symptoms through an intuitive interface, while doctors gain access to aggregated diagnostic insights and patient history. The machine learning pipeline integrates feature importance extraction, allowing visualization of symptom influence using Chart.js, thereby enhancing interpretability. Experimental results demonstrate high classification accuracy and robust performance across diverse symptom combinations. The system significantly reduces diagnostic latency and provides preliminary clinical insights, particularly in resource-constrained environments. The integration of AI with web-based healthcare systems highlights the potential for scalable, accessible, and efficient diagnostic assistance tools.
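The multi-hot symptom encoding described above can be sketched as follows; the symptom names and vocabulary size are hypothetical stand-ins for the 130+ symptoms the system actually uses:

```python
def multi_hot(symptoms, symptom_index):
    """Encode a set of reported symptoms as a multi-hot vector over the
    full symptom vocabulary, the input representation the classifier
    consumes. The symptom names here are hypothetical examples.
    """
    vector = [0] * len(symptom_index)
    for s in symptoms:
        if s in symptom_index:  # unknown symptoms are silently ignored
            vector[symptom_index[s]] = 1
    return vector

index = {"fever": 0, "cough": 1, "headache": 2, "fatigue": 3}
print(multi_hot({"fever", "fatigue"}, index))  # [1, 0, 0, 1]
```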

Keywords Artificial Intelligence; Clinical Decision Support; Random Forest; Disease Prediction; Machine Learning


Paper ID IJIFR/V13/E8/048 Page No.: 1257-1267

Subject Area Computer Science

Authors C. Nawaz Basha
S. Usharani

Abstract The insurance industry confronts two analytically critical and financially consequential challenges: accurate prediction of claim settlement amounts and timely detection of fraudulent claims. Conventional approaches — rule-based heuristics, logistic regression scorecards, and manual adjuster assessments — are demonstrably inadequate for capturing the nonlinear, high-dimensional interactions that characterise modern insurance claim data. This paper presents ClaimSmart AI, a comprehensive, modular, end-to-end machine learning pipeline that addresses both challenges within a unified analytical framework. The system operates on a synthetically generated dataset of 15,000 insurance claim records encompassing 19 attributes spanning policyholder demographics, policy characteristics, vehicle parameters, claim specifics, and behavioural indicators. A dual-model architecture employs a Random Forest Regressor (150 estimators) for claim amount prediction and a Random Forest Classifier (150 estimators, balanced class weights) for binary fraud risk detection, both trained on a stratified 80/20 holdout split with StandardScaler feature normalisation and LabelEncoder categorical transformation. The regression model achieves a Mean Absolute Error below INR 15,000 and an R-squared coefficient of determination exceeding 0.70, while the classification model delivers accuracy above 0.80, fraud-class recall exceeding 0.74, and F1-Score above 0.76, surpassing logistic regression and rule-based baselines on equivalent evaluation protocols. Prediction outputs are enriched with four derived business metrics — predicted claim amount, claim variance, fraud risk probability, and a three-tier fraud risk category — and persisted to a MySQL relational database for direct consumption by Power BI and enterprise analytics platforms. 
Eight publication-quality visualisation charts provide comprehensive analytical coverage from fraud distribution and regional heatmaps to actual-versus-predicted scatter analysis. A mysqldump-format SQL export module ensures enterprise portability and regulatory archival compliance. The complete pipeline executes through a single orchestration script, establishing ClaimSmart AI as both a rigorous academic contribution and a practical template for production insurance analytics deployment.
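The three-tier fraud risk category mentioned among the derived business metrics can be sketched as below; the 0.3/0.7 probability cut-offs are illustrative assumptions, not the paper's thresholds:

```python
def fraud_risk_tier(probability, low=0.3, high=0.7):
    """Map a classifier's fraud probability to a three-tier risk category.

    The abstract states only that a three-tier category is derived; the
    0.3 and 0.7 cut-offs here are assumptions for illustration.
    """
    if probability >= high:
        return "High"
    if probability >= low:
        return "Medium"
    return "Low"

print([fraud_risk_tier(p) for p in (0.12, 0.45, 0.91)])  # ['Low', 'Medium', 'High']
```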

Keywords Random Forest; Insurance Fraud Detection; Claim Amount Regression; Imbalanced Classification; Business Intelligence Integration


Paper ID IJIFR/V13/E8/047 Page No.: 1248-1256

Subject Area Computer Engineering

Authors G Mallikarjuna Reddy
V. Vijayalakshmi
S. Usharani

Abstract Online voting systems represent one of the most demanding intersections of cybersecurity, civic participation, and artificial intelligence, requiring simultaneous guarantees of voter authentication, vote uniqueness, ballot integrity, fraud detection, and transparent auditability within a framework accessible to non-technical citizens. Existing e-voting platforms either rely on proprietary cryptographic protocols with limited AI-driven fraud detection, or prioritise accessibility while sacrificing the robust integrity mechanisms demanded by high-stakes elections. This paper presents an AI-Based Online Voting System that addresses these co-requirements through four integrated AI-driven components embedded within a production-quality Django 5.x web application. The system architecture implements: (1) an Isolation Forest-based anomaly detection engine that computes a real-time risk score for every vote submission from three behavioural features — hour of submission, minute of submission, and IP address vote frequency — flagging suspicious events within the same HTTP request cycle without perceptible latency; (2) a TextBlob NLP sentiment analysis module that computes a manifesto polarity score (range -10 to +10) for each candidate biography using lazy computation triggered on first election page view; (3) a TF-IDF and cosine similarity AI Candidate Matcher that ranks candidates by alignment with voter-submitted preference text using Scikit-learn's TfidfVectorizer with English stop-word removal; and (4) a SHA-256 cryptographic vote hashing mechanism that generates a tamper-evident fingerprint for each Vote record at save time. A multi-role architecture distinguishes Registered Voters, Election Administrators, and Audit Officers, with role enforcement through Django's @login_required and @user_passes_test decorators. 
Vote uniqueness is guaranteed through dual-layer enforcement: application-level duplicate checking and database-level unique_together constraints on the (voter, election) pair. Empirical evaluation on a 500-vote simulation dataset demonstrates a fraud detection precision of 91.2%, a false positive rate of 4.3%, a candidate matching accuracy of 89.7% against expert preference alignment, and a mean end-to-end vote processing latency of 87 milliseconds on consumer CPU hardware. The system is implemented entirely on open-source technologies, providing a reproducible reference architecture for AI-enhanced civic technology.
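The SHA-256 tamper-evident fingerprint described above can be sketched with Python's standard library; the paper confirms only that each Vote record is hashed at save time, so the field list and separator below are assumptions:

```python
import hashlib

def vote_fingerprint(voter_id, election_id, candidate_id, timestamp):
    """Produce a tamper-evident SHA-256 fingerprint for a vote record.

    Which fields enter the hash, and the '|' separator, are illustrative
    assumptions; any change to a field yields a different digest, which
    is what makes tampering evident.
    """
    payload = "|".join(map(str, (voter_id, election_id, candidate_id, timestamp)))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

h = vote_fingerprint(42, 7, 3, "2025-07-01T10:00:00")
print(len(h))  # 64 hex characters
```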

Keywords Online voting system; Isolation Forest; anomaly detection; TF-IDF candidate matching; NLP sentiment analysis; SHA-256 vote integrity; Django; electoral security


Paper ID IJIFR/V13/E8/046 Page No.: 1240-1246

Subject Area Computer Science

Authors Madde Sanghavi
B. Shireesha

Abstract The global burden of lifestyle-related diseases including obesity, type-2 diabetes, and cardiovascular conditions has established personalized nutrition as a healthcare necessity. Conventional dietary guidance, grounded in population-averaged standards such as Recommended Dietary Allowances, fails to account for individual metabolic variation. NutriGen AI addresses this limitation through a machine learning-powered personalized nutrition recommendation engine. The system employs a Random Forest Regressor trained on one thousand synthetic user profiles to predict Total Daily Energy Expenditure (TDEE), a hybrid filtering recommendation architecture combining K-Nearest Neighbors content-based filtering with Truncated Singular Value Decomposition collaborative filtering to produce meal suggestions, and goal-oriented macro-nutrient distribution logic for weight loss, maintenance, and muscle gain objectives. Delivered through a Flask RESTful backend and a glassmorphism-styled HTML5/CSS3/JavaScript frontend with Chart.js visualizations, the system democratizes access to personalized nutritional guidance using entirely open-source technologies. Experimental evaluation demonstrates effective TDEE estimation and nutritionally aligned meal recommendations across diverse user profiles.
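As context for the TDEE prediction above, a common closed-form baseline such as the Mifflin-St Jeor equation can be sketched as follows; this formula is not stated in the abstract (the paper's engine is a Random Forest Regressor), so it is shown only as the kind of estimate such a model learns to refine:

```python
def tdee_baseline(weight_kg, height_cm, age, sex, activity_factor):
    """Formula-based TDEE baseline using the Mifflin-St Jeor equation.

    This baseline is an assumption for illustration, not the paper's
    method. The activity factor (roughly 1.2 for sedentary up to 1.9
    for very active) multiplies the basal metabolic rate.
    """
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age
    bmr += 5 if sex == "male" else -161
    return bmr * activity_factor

# Hypothetical profile: 70 kg, 175 cm, 30-year-old male, moderately active.
print(round(tdee_baseline(70, 175, 30, "male", 1.55)))  # kcal/day
```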

Keywords Personalized Nutrition, Machine Learning, Random Forest Regressor, Hybrid Recommendation System, KNN, Collaborative Filtering, TDEE Estimation, Flask, Health Informatics


Paper ID IJIFR/V13/E8/045 Page No.: 1231-1239

Subject Area Computer Engineering

Authors Mutra Rekha
B. Shireesha

Abstract Smart manufacturing (Industry 4.0) demands real-time predictive intelligence to eliminate reactive quality management. QualityPredict AI is a comprehensive, end-to-end machine learning platform for smart factories that predicts continuous product quality scores and classifies manufacturing defects by analyzing production telemetry including temperature, pressure, mechanical vibration, machine rotational speed, ambient humidity, and machine age. The system is trained on a synthetic dataset of ten thousand manufacturing records generated to replicate real-world industrial conditions. Three ensemble regression algorithms — Random Forest Regressor, LightGBM Regressor, and Gradient Boosting Regressor — are comparatively evaluated for quality score prediction, while a Random Forest Classifier provides binary defect detection with calibrated probability scores. Feature importance analysis reveals mechanical vibration as the dominant quality predictor (~63% variance explained), followed by machine age (~22%) and operating temperature (~8%). The best regression model achieves an R² score exceeding 0.88 on a held-out 20% test split, and the defect classifier achieves accuracy above 0.90. Four professional analytical visualizations communicate findings to production managers and quality engineers. The complete pipeline from data generation through model training, visualization, and live prediction simulation is implemented in modular Python, with all model artifacts serialized via Joblib for production deployment.
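The R² metric the abstract reports for its regressors can be computed as sketched below; the actual/predicted values are invented to illustrate the calculation, not results from the paper:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    The quality-score values below are invented to illustrate the metric
    the paper reports (R² > 0.88 for its best regressor).
    """
    mean_a = sum(actual) / len(actual)
    ss_tot = sum((a - mean_a) ** 2 for a in actual)   # total variance
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # residual
    return 1 - ss_res / ss_tot

actual = [80.0, 90.0, 70.0, 85.0]
predicted = [82.0, 88.0, 71.0, 84.0]
print(round(r_squared(actual, predicted), 3))
```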

Keywords Manufacturing Quality Prediction, Industry 4.0, Random Forest, LightGBM, Gradient Boosting, Defect Classification, Feature Importance, Smart Manufacturing, Statistical Process Control, Predictive Quality Management


Paper ID IJIFR/V13/E8/044 Page No.: 1296-1301

Subject Area COMMERCE

Authors BIJI B

Abstract Micro, Small, and Medium Enterprises (MSMEs) play a pivotal role in economic development by generating employment, fostering innovation, and promoting inclusive growth. In the Indian state of Kerala, MSMEs are a cornerstone of the industrial ecosystem, supported by a range of government initiatives aimed at financial assistance, skill development, and ease of doing business. This study explores the effectiveness and scope of Kerala’s MSME support schemes, including capital subsidies, credit facilitation programs, and digital governance initiatives. It examines how these schemes contribute to entrepreneurial growth, particularly among marginalized groups such as women and first-time entrepreneurs. Despite a well-structured policy framework, challenges such as procedural complexity, limited awareness, and accessibility barriers persist. The paper highlights the need for improved implementation strategies, better outreach, and continuous policy refinement to maximize the impact of these schemes. Overall, Kerala’s MSME model offers valuable insights into building a sustainable and inclusive small business ecosystem.

Keywords MSMEs, Entrepreneurship, Government Schemes, Economic Development, Small Business Support, Kerala


Paper ID IJIFR/V13/E8/042 Page No.: 1222-1230

Subject Area Computer Science

Authors K. Aparna
V. Vijayalakshmi

Abstract Campus placement plays a vital role in connecting engineering students with employers, but many institutions still rely on manual methods such as spreadsheets and emails, leading to inefficiencies. The Intelligent College Placement Management Platform is a full-stack web application developed using the Django framework to automate and streamline the entire placement process. The system supports three user roles: students, recruiters, and placement officers. Students can create profiles, upload resumes, apply for jobs, and track application status. Recruiters can post job openings, review applications, and issue offer letters. Placement officers manage the system through a centralized dashboard that provides real-time insights into placement activities. A key feature of the platform is the simulated AI-based matching engine, which uses natural language processing techniques to compare student skills with job requirements and generate a match score. This improves decision-making for both students and recruiters. The system also enforces placement policies such as the One-Student-One-Offer rule. Built using Python, Django, SQLite, and Bootstrap, the platform ensures efficient, transparent, and scalable placement management, improving coordination and reducing manual effort in campus recruitment processes.
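A simulated match score of the kind the abstract describes can be sketched as set overlap between skills; the abstract mentions NLP techniques without specifying them, so this Jaccard-style version is a simplified stand-in:

```python
def match_score(student_skills, job_requirements):
    """Simulated matching score: Jaccard overlap between the student's
    skills and the job's required skills.

    The platform is described as using NLP techniques; this set-overlap
    version is a simplified stand-in for illustration.
    """
    s = {x.lower() for x in student_skills}
    j = {x.lower() for x in job_requirements}
    if not (s | j):
        return 0.0
    return len(s & j) / len(s | j)

score = match_score(["Python", "Django", "SQL"], ["python", "sql", "aws", "docker"])
print(round(score, 2))  # 2 shared of 5 distinct skills -> 0.4
```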

Keywords Campus Placement System, Django Web Application, Placement Management, AI Matching Engine, Natural Language Processing (NLP), Resume Screening, Job Recommendation, Student Recruitment, Skill Matching, Full Stack Development


Paper ID IJIFR/V13/E8/041 Page No.: 1214-1221

Subject Area Computer Engineering

Authors Kadiri Bhuvaneswari
V. Vijayalakshmi

Abstract The contemporary recruitment landscape is characterised by an exponentially growing volume of applications that overwhelm keyword-based Applicant Tracking Systems (ATS), which treat language as a bag of isolated tokens and systematically fail to recognise semantically equivalent competency descriptions. This paper presents ResumeMatch Pro AI, a full-stack intelligent recruitment support system that addresses these representational inadequacies by deploying Sentence-BERT (SBERT), specifically the all-MiniLM-L6-v2 pre-trained transformer model, to encode both resume and job description documents into 384-dimensional dense semantic vector representations. Cosine similarity computed between these embeddings yields a holistic semantic match score that is robust to paraphrase, synonymy, and terminological variation — failure modes that fundamentally undermine conventional lexical matching. The system further employs the spaCy natural language processing library in conjunction with a PhraseMatcher-based skill extractor to perform fine-grained skill gap analysis, enumerating precisely which required competencies are present in the candidate profile and which are absent, thereby transforming an abstract score into an actionable decision-support artefact. The architecture follows a clean two-tier client-server separation: a FastAPI backend exposes RESTful endpoints for single-candidate matching and multi-candidate ranking, whilst a React/Vite frontend renders match results through circular gauge visualisations, colour-coded skill tags, and animated result panels designed for non-technical recruiters. Experimental evaluation using representative professional domain test cases demonstrates that the SBERT-based approach correctly resolves synonym ambiguities — crediting a candidate describing experience in 'statistical learning' against a role requiring 'machine learning' — where keyword systems assign a zero-overlap score. 
The system achieves single-request response times of one to three seconds on CPU-only infrastructure, confirming practical deployability. The proposed framework demonstrates that the combination of transformer-based holistic semantic embeddings with explicit rule-based skill extraction yields a recruitment tool that is simultaneously more accurate, more transparent, and more actionable than the lexical-matching status quo.
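The semantic-match core described above reduces to a cosine similarity between two embedding vectors; the sketch below is illustrative rather than the authors' code, and the three-dimensional vectors stand in for the 384-dimensional all-MiniLM-L6-v2 outputs:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

resume_vec = [0.12, 0.85, 0.31]   # stand-in for a 384-dim SBERT embedding
job_vec = [0.10, 0.80, 0.35]
score = cosine_similarity(resume_vec, job_vec)
```

Because the score depends on the whole vector rather than shared tokens, paraphrases such as "statistical learning" and "machine learning" can still yield a high similarity.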

Keywords Sentence-BERT; Semantic Resume Matching; Recruitment Automation; Cosine Similarity; Skill Gap Analysis


Paper ID IJIFR/V13/E8/040 Page No.: 1206-1213

Subject Area Computer Science

Authors Jeripiti Reddy Prasad
V. Vijayalakshmi

Abstract The exponential growth of digital financial transactions in India has precipitated a commensurate escalation in sophisticated payment fraud, necessitating intelligent, adaptive detection systems capable of operating at scale. This paper presents SecurePay Shield, a comprehensive, end-to-end machine learning pipeline engineered for real-time identification of fraudulent transactions within the Indian digital payment ecosystem. The proposed system addresses three principal challenges endemic to fraud detection: severe class imbalance, high-dimensional heterogeneous feature spaces, and the requirement for probabilistic, interpretable risk assessments amenable to regulatory scrutiny. The architecture employs an ensemble learning strategy integrating three complementary algorithms: a Random Forest Classifier (200 estimators), a Gradient Boosting Classifier (150 estimators), and an Isolation Forest anomaly detection model. A domain-specific feature engineering pipeline transforms 23 raw transaction attributes into a 35-dimensional feature space, computing composite risk indicators including the Risk_Composite score, IP_Risk_Score, and Velocity_Score, which collectively emerge as the most discriminative predictors of fraudulent activity. Class imbalance is mitigated through the application of the Synthetic Minority Over-sampling Technique (SMOTE), yielding a balanced training corpus of 22,560 instances. The Random Forest model, selected as the production deployment candidate, achieves an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) of 1.0000 and an F1-Score of 1.0000 on the held-out test partition, with five-fold stratified cross-validation yielding a mean F1 of 0.9978 (±0.0021), confirming model robustness and generalizability. 
Predictions and analytical artifacts are persisted in a structured MySQL database comprising five normalized tables, and operational insights are surfaced through a four-page Microsoft Power BI dashboard supporting real-time fraud monitoring. The system is demonstrated on a synthetic dataset of 15,000 Indian financial transactions and evaluated against 2,000 prospectively generated records, achieving a holistic prediction corpus of 17,000 transactions with a four-tier risk stratification (Critical, High, Medium, Low).
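The four-tier risk stratification mentioned above can be sketched as a simple mapping from a model's fraud probability to a label; the cut-offs below are illustrative assumptions, not values taken from the paper:

```python
def risk_tier(fraud_probability):
    """Map a fraud probability in [0, 1] to a four-tier label (illustrative cut-offs)."""
    if fraud_probability >= 0.90:
        return "Critical"
    if fraud_probability >= 0.70:
        return "High"
    if fraud_probability >= 0.40:
        return "Medium"
    return "Low"
```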

Keywords Fraud Detection; Ensemble Learning; SMOTE; Random Forest; Digital Payment Security; Risk Scoring; Feature Engineering


Paper ID IJIFR/V13/E8/039 Page No.: 1197-1205

Subject Area Computer Engineering

Authors Koppala Jagadeesh
V. Vijayalakshmi
Dr. S. Usharani

Abstract The insurance industry processes millions of claims annually through predominantly manual, paper-based workflows that impose substantial administrative overhead, systemic processing delays, and significant vulnerability to fraudulent submissions. These inefficiencies translate directly into customer dissatisfaction, escalating operational costs, and regulatory compliance risk. Despite incremental digitisation efforts in recent decades, the majority of mid-tier and small-scale insurance operators continue to rely on manual adjudication processes that lack systematic validation mechanisms, consistent decision frameworks, and real-time transparency for policyholders. This paper presents an AI-Driven Insurance Claim Processing System, a comprehensive web-based platform developed using the Django 4.x framework, Python 3.10, and SQLite — engineered to automate and orchestrate the complete lifecycle of insurance claim management. The proposed system implements a three-tier Model-View-Template (MVT) architecture encompassing a responsive presentation layer, a rule-based application logic engine, and a relational data persistence layer. Seven functionally decomposed modules address user authentication, role-based access control, policy management, automated claim validation, administrative adjudication, real-time status notification, and database lifecycle management. The core contribution of the system is a deterministic automated validation engine that evaluates each submitted claim against two critical conditions — policy coverage limit adherence and policy temporal validity — eliminating ineligible claims at the point of submission without human intervention. Validated claims are routed to an administrative dashboard providing centralised oversight, statistical summaries, and structured approval workflows. 
Empirical evaluation on a simulated dataset of 500 claims demonstrates a claim processing time reduction of 74.3% relative to a manual baseline, a validation accuracy of 99.6%, and a false positive rejection rate of 0.4%. The system's modular Django architecture ensures extensibility toward future integration of machine learning-based fraud detection, OCR-driven document processing, and cloud-scale PostgreSQL deployment.
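The deterministic validation engine evaluates exactly two conditions, coverage-limit adherence and policy temporal validity, so its core logic can be sketched in a few lines (an illustrative reconstruction, not the system's source):

```python
from datetime import date

def validate_claim(amount, coverage_limit, claim_date, policy_start, policy_end):
    """Two deterministic checks: coverage-limit adherence and policy temporal validity."""
    if amount > coverage_limit:
        return "Rejected: exceeds coverage limit"
    if not (policy_start <= claim_date <= policy_end):
        return "Rejected: outside policy validity period"
    return "Routed for adjudication"
```

Claims failing either rule are eliminated at submission; the rest proceed to the administrative dashboard.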

Keywords Insurance claim processing; Django MVT architecture; automated validation; rule-based classification; role-based access control; digital workflow automation; claim adjudication


Paper ID IJIFR/V13/E8/038 Page No.: 1189-1196

Subject Area Computer Engineering

Authors Kondakavali Vani
V.Vijayalakshmi
Dr.S.Usharani

Abstract Construction sites consistently rank among the most hazardous occupational environments worldwide, with head injuries from falling or flying objects identified as a primary contributor to construction fatalities in every major market. Despite the universal regulatory mandate for safety helmet usage, non-compliance remains pervasive owing to the practical impossibility of maintaining continuous manual supervision across large, complex sites. Traditional automated monitoring approaches based on conventional computer vision techniques have demonstrated insufficient accuracy for reliable deployment in the visually complex conditions typical of active construction environments, while commercial AI-based platforms impose subscription costs prohibitive to small and medium contractors. This paper presents SafeHelmet Vision AI, a deep learning-based industrial safety monitoring system designed to automate helmet compliance detection at construction sites and related industrial workplaces. The proposed system employs a YOLOv8n (nano) object detection model trained via transfer learning from COCO-pretrained weights on a domain-specific dataset of 4,200 annotated construction site images encompassing 9,800 helmet and 8,200 worker bounding box instances. Training was conducted for 100 epochs with AdamW optimisation (lr0 = 0.001), comprehensive data augmentation including mosaic, HSV perturbation, random flip, rotation (±10°), and scale variation (±50%), yielding a validation mean Average Precision at IoU = 0.50 (mAP50) of 97.8%, a precision of 95.2%, and a recall of 94.9%. The trained model is integrated into a Streamlit web application that accepts uploaded construction site images in JPG, PNG, or BMP formats and returns annotated detection results with bounding boxes, class labels, and confidence scores within a mean inference latency of 165 milliseconds on standard CPU hardware. 
An automated safety compliance assessment engine evaluates helmet-to-person count ratios and generates colour-coded violation alerts. Comparative evaluation demonstrates a 27.2 percentage-point precision advantage and a 33.9 percentage-point recall advantage over a traditional Haar cascade baseline. The complete system requires no client-side installation and is deployable to Streamlit Cloud from a GitHub repository with a single configuration step, making enterprise-grade safety monitoring accessible to safety personnel without specialised technical training.
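The compliance assessment step described above, evaluating helmet-to-person count ratios and emitting colour-coded alerts, can be sketched as follows; the ratio bands are illustrative assumptions rather than the system's published thresholds:

```python
def compliance_alert(helmet_count, person_count):
    """Colour-coded alert from the helmet-to-person ratio (bands are illustrative)."""
    if person_count == 0:
        return "green", 1.0          # nobody detected, nothing to flag
    ratio = helmet_count / person_count
    if ratio >= 1.0:
        return "green", ratio        # full compliance
    if ratio >= 0.5:
        return "amber", ratio        # partial compliance
    return "red", ratio              # violation alert
```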

Keywords Safety helmet detection; YOLOv8; construction site safety; personal protective equipment; transfer learning; real-time object detection; Streamlit deployment


Paper ID IJIFR/V13/E8/037 Page No.: 1180-1188

Subject Area Computer Science

Authors Pudu Bhargava
S. Manjunath Reddy

Abstract Employee attrition is a major challenge for organizations, leading to productivity loss, higher recruitment costs, and disruption in operations. This paper presents an Employee Performance and Attrition Prediction System that combines HR management with machine learning and generative AI. The system is built using Django with SQLite as the database and uses Python libraries such as Scikit-learn for prediction, Openpyxl for data export, and Google Gemini API for generating explanations. It stores structured employee data including performance, attendance, and evaluations for analysis. A classification model predicts attrition risk based on factors like performance rating, projects completed, and salary. To improve interpretability, a generative AI module explains predictions in simple HR-friendly language. Overall, the system improves data management, enhances prediction accuracy, and provides an easy-to-understand, scalable solution for workforce analysis and decision-making.

Keywords Employee Attrition Prediction; Machine Learning; Generative AI; Django; HR Analytics


Paper ID IJIFR/V13/E8/035 Page No.: 1172-1179

Subject Area Computer Science

Authors Patnam Jayasree
Manjunath Reddy
Dr. S. Usharani

Abstract Earthquakes are among the most devastating natural disasters due to their sudden occurrence and the severe damage they cause to human life and infrastructure. Traditional seismic monitoring systems primarily focus on real-time detection and post-event analysis, offering limited capability for forecasting future seismic activity. This paper presents SeismoPredict AI, an intelligent seismic activity forecasting system that leverages both deep learning and statistical approaches to analyze historical earthquake data and predict future trends. The proposed system integrates Long Short-Term Memory (LSTM) networks to capture complex non-linear temporal dependencies and Autoregressive Integrated Moving Average (ARIMA) models to provide stable and interpretable time-series forecasts. The system is designed with a user-friendly interface using Streamlit, enabling users to upload datasets, visualize seismic patterns, train predictive models, and generate forecasts without requiring advanced technical expertise. Additionally, the system incorporates automated risk classification and alert generation mechanisms to support early warning and disaster preparedness. Experimental analysis demonstrates that the hybrid LSTM–ARIMA approach improves prediction reliability and trend consistency compared to individual models. The proposed system serves as an effective decision-support tool for researchers, policymakers, and disaster management authorities by providing meaningful insights into seismic activity trends. Although precise earthquake prediction remains inherently uncertain, the system contributes to proactive risk assessment and enhances preparedness strategies through data-driven forecasting.

Keywords Seismic Forecasting; LSTM; ARIMA; Time Series Analysis; Disaster Management


Paper ID IJIFR/V13/E8/034 Page No.: 1163-1171

Subject Area Computer Engineering

Authors Aluganti Vishnu Priya
M. Gowthami

Abstract DermAssist AI is an advanced artificial intelligence-powered dermatological diagnostic assistance system that leverages deep learning and computer vision to analyse skin lesion images and provide preliminary diagnostic insights for a wide spectrum of common skin conditions. The system is built upon a Convolutional Neural Network (CNN) architecture enhanced with transfer learning from a pre-trained EfficientNetB3 model, trained on the HAM10000 dataset containing over ten thousand labelled dermatoscopic images spanning seven diagnostic categories: Melanocytic nevi, Melanoma, Benign keratosis-like lesions, Basal cell carcinoma, Actinic keratoses, Vascular lesions, and Dermatofibroma. The complete data science and software engineering lifecycle is implemented, encompassing systematic data preprocessing, augmentation, model training, performance evaluation using medical-grade metrics (AUC, sensitivity, specificity), and deployment as an interactive Flask web application. An explainability layer using Gradient-weighted Class Activation Mapping (Grad-CAM) highlights the specific image regions most influential in the diagnostic prediction. The system achieves a macro-averaged AUC of 0.889 across all seven classes, demonstrating strong generalisation capability. DermAssist AI represents a meaningful contribution to the democratisation of dermatological care through artificial intelligence.

Keywords Skin Disease Detection, Deep Learning, EfficientNet, Transfer Learning, Grad-CAM, HAM10000, Dermatology AI, Flask Deployment, Medical Image Analysis, CNN


Paper ID IJIFR/V13/E8/032 Page No.: 1154-1161

Subject Area Computer Engineering

Authors Gorrela Kaveri
M. Gowthami
Dr. S. Usharani

Abstract Driver fatigue constitutes one of the most critical, yet frequently underestimated, contributors to fatal road traffic accidents worldwide. Epidemiological studies conducted by the World Health Organization and regional road safety authorities consistently attribute fifteen to twenty percent of motorway collisions to drowsiness-related impairment, with the actual incidence likely higher owing to systematic under-reporting. The principal challenge is that fatigue onset is gradual and subjective, rendering drivers incapable of reliably self-assessing their level of impairment, particularly during extended sessions of monotonous driving. Existing intervention strategies—encompassing hours-of-service legislation, electronic logging devices, and lane departure warning systems—operate at the policy or vehicle-dynamics level, and are intrinsically incapable of detecting the physiological precursors of drowsiness in real time. This paper presents DriveSafe Vision, a non-intrusive, camera-based, real-time driver fatigue detection system implemented entirely in Python using open-source libraries. The proposed system continuously acquires video from a standard webcam and applies dlib's 68-point facial landmark shape predictor to localize anatomical reference points around the ocular and perioral regions in each frame. Two normalized geometric metrics are computed per frame: the Eye Aspect Ratio (EAR), formulated as the ratio of summed vertical inter-landmark distances to twice the horizontal inter-landmark distance across both eyes, and the Mouth Aspect Ratio (MAR), formulated analogously for the outer lip contour. Drowsiness is inferred when the EAR drops below a calibrated threshold of 0.25 for a sustained window of twenty or more consecutive frames, while yawning is detected when the MAR exceeds 0.60 for fifteen or more consecutive frames. 
The system incorporates a heads-up display presenting session duration, cumulative blink and yawn counts, and animated EAR/MAR progress bars with threshold markers. Upon detection of a fatigue event, an unmistakable translucent red visual overlay and an optional audio alarm are simultaneously activated. Evaluation on a controlled volunteer cohort demonstrates a drowsiness detection precision of 91.3%, a recall of 88.7%, an F1-score of 89.98%, and a mean per-frame processing latency of 38.4 milliseconds on a mid-range consumer CPU, corresponding to an effective monitoring rate of approximately 26 frames per second. The system requires no proprietary hardware and is deployable on any platform supporting Python 3.8 or later, positioning it as a viable fatigue monitoring solution for commercial fleet operators, driving simulator research, and individual vehicle retrofit applications.
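The EAR formulation and the sustained-window rule stated above translate directly into code; the sketch below follows the stated formula (|p2−p6| + |p3−p5|) / (2·|p1−p4|) over six landmarks for one eye, with hand-picked landmark coordinates as illustrative inputs:

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six landmarks p1..p6: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    v1 = math.dist(eye[1], eye[5])   # vertical lid-to-lid distance
    v2 = math.dist(eye[2], eye[4])
    h = math.dist(eye[0], eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def drowsy(ear_series, threshold=0.25, window=20):
    """Fatigue flag: EAR stays below threshold for `window` consecutive frames."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run >= window:
            return True
    return False

open_eye = [(0, 0), (1, 0.5), (2, 0.5), (3, 0), (2, -0.5), (1, -0.5)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

In the full system the EAR is averaged across both eyes per frame; the MAR is computed analogously over the outer lip contour with its own threshold and window.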

Keywords Driver fatigue detection; Eye Aspect Ratio; Mouth Aspect Ratio; Facial landmark localization; Computer vision; Real-time monitoring


Paper ID IJIFR/V13/E8/031 Page No.: 1138-1147

Subject Area Computer Engineering

Authors Karamala Sai Pravallika
V. Vijayalakshmi

Abstract Financial markets are driven by both quantitative fundamentals and qualitative information flows — news articles, analyst commentary, press releases, and regulatory announcements — that collectively shape market participant sentiment and govern short-term price dynamics. Translating unstructured financial text into structured predictive signals constitutes a central challenge in computational finance. This paper presents FinSent AI, an end-to-end, modular machine learning pipeline that ingests live financial news from RSS feeds, applies FinBERT — a BERT-based transformer language model pre-trained on financial text corpora — to derive continuous daily sentiment scores, and combines those scores with classical technical price features to train classification models predicting next-day stock price directional movement. The system is demonstrated using Apple Inc. (AAPL) as the primary ticker, sourcing historical OHLCV data via the Yahoo Finance API and news headlines from Yahoo Finance and Reuters RSS feeds. Text preprocessing eliminates noise including URLs, punctuation, and stopwords, after which FinBERT classifies each article as positive, negative, or neutral with an associated confidence score. Daily aggregation of per-article signed sentiment scores — computed as the product of polarity label and model confidence — yields a continuous sentiment signal temporally aligned to the stock price time series. Feature engineering yields a seven-dimensional input matrix comprising closing price, trading volume, daily return, five-day and ten-day moving averages, five-day rolling return volatility, and the daily sentiment score. Two ensemble classifiers — Random Forest and XGBoost — are trained on an 80/20 chronological train-test split to prevent data leakage. 
The superior model is deployed through an interactive four-tab Streamlit web dashboard delivering stock and sentiment visualization, prediction overlays, news browsing, and real-time next-day directional forecasts with confidence scores. The complete architecture is reproducible, extensible to any tradeable ticker, and readily deployable on cloud infrastructure.
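The daily sentiment aggregation described above, the per-article polarity label multiplied by model confidence and averaged over the day, can be sketched as follows (an illustrative reconstruction, not the authors' code):

```python
def signed_score(label, confidence):
    """Signed per-article sentiment: polarity (+1 / 0 / -1) times model confidence."""
    polarity = {"positive": 1, "neutral": 0, "negative": -1}[label]
    return polarity * confidence

def daily_sentiment(articles):
    """Mean signed score over one day's (label, confidence) pairs; 0.0 for an empty day."""
    if not articles:
        return 0.0
    return sum(signed_score(label, conf) for label, conf in articles) / len(articles)
```

The resulting continuous daily signal is then aligned to the price series and joins the six technical features in the model's input matrix.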

Keywords Financial Sentiment Analysis; FinBERT; Ensemble Learning; Stock Price Prediction; Natural Language Processing


Paper ID IJIFR/V13/E8/030 Page No.: 1129-1137

Subject Area Computer Engineering

Authors Akula Tharun
Dr. S. Usharani

Abstract Hospital overcrowding, rooted in a systemic mismatch between highly dynamic patient demand and coarse-grained resource allocation practices, represents a critical patient safety and operational efficiency challenge in contemporary acute-care environments. Existing hospital management systems are predominantly retrospective in orientation, recording historical events without providing the predictive intelligence necessary for proactive resource deployment. MediFlow Optimizer addresses this fundamental gap through the design and implementation of a four-tier, machine learning-augmented hospital resource and patient flow intelligence platform. The proposed system integrates an Excel-based data acquisition layer, a Python-driven analytics engine employing a Random Forest Regressor (100 estimators) trained on temporal features—hour of day, day of week, and weekend indicator—extracted from historical admission records, a MySQL relational database persistence layer comprising three primary tables and three pre-aggregated analytical views, and a Microsoft Power BI interactive dashboard layer delivering operational intelligence to hospital administrators and clinical quality managers. The Random Forest model generates hourly patient inflow forecasts at seven-day granularity with a Mean Absolute Error (MAE) of approximately 0.85 patients per hour, a Root Mean Squared Error (RMSE) of 1.20, and a coefficient of determination (R²) of 0.78, outperforming baseline Linear Regression, standalone Decision Tree, and Gradient Boosting alternatives across all three evaluation metrics. Each forecasted hourly period is accompanied by peak-status classification (Normal, High, or Critical Peak) and actionable resource recommendations specifying appropriate clinical staffing ratios and bed deployment targets. 
The system further computes and tracks four daily key performance indicators—average patient wait time, total admission volume, high-severity case count, and bed utilization rate—presented through a custom dark-themed Power BI dashboard designed for operational deployment in clinical monitoring environments. The complete platform is implemented exclusively with open-source technologies, rendering it reproducible and accessible to healthcare organizations without investment in proprietary analytical infrastructure.
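The temporal features named above (hour of day, day of week, weekend indicator) and the peak-status classification can be sketched as below; the inflow thresholds are illustrative assumptions, not values reported by the system:

```python
from datetime import datetime

def temporal_features(ts):
    """Hour-of-day, day-of-week (Mon=0), and weekend indicator from a timestamp."""
    return {
        "hour": ts.hour,
        "day_of_week": ts.weekday(),
        "is_weekend": int(ts.weekday() >= 5),
    }

def peak_status(predicted_inflow, high=8, critical=12):
    """Classify a forecasted hourly patient inflow (illustrative thresholds)."""
    if predicted_inflow >= critical:
        return "Critical Peak"
    if predicted_inflow >= high:
        return "High"
    return "Normal"

f = temporal_features(datetime(2025, 6, 7, 14, 30))  # a Saturday afternoon
```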

Keywords Hospital Patient Flow Prediction; Random Forest Regressor; Healthcare Resource Allocation; Business Intelligence Dashboard; Time-Series Forecasting


Paper ID IJIFR/V13/E8/029 Page No.: 1122-1128

Subject Area Computer Engineering

Authors Bajanthri Reddy Kishore
Dr. S. Usharani

Abstract Contemporary power systems are subject to increasingly volatile consumption patterns driven by urbanization, industrialization, and the proliferation of smart devices, exposing the limitations of conventional statistical forecasting methodologies. This paper presents PowerPulse Analytics, an end-to-end intelligent energy demand forecasting and load optimization framework that integrates ensemble machine learning with a structured data pipeline and interactive visualization. The system employs a Random Forest Regressor trained on a multi-dimensional dataset encompassing temporal attributes, regional consumer segmentation, ambient environmental variables (temperature and humidity), and holiday indicators to generate hourly energy consumption forecasts over rolling seven-day horizons. The architecture is organized into five functional layers: Data Acquisition, Feature Engineering and Preprocessing, Machine Learning Inference, Persistent Storage via a MySQL relational database, and Decision-Support Visualization through a Power BI dashboard. Feature engineering transforms raw timestamps into discriminative temporal signals including hour-of-day, day-of-week, and month, which are critical for capturing diurnal and seasonal consumption cycles. Experimental validation demonstrates that the Random Forest model achieves superior predictive accuracy compared to baseline statistical methods, with a Mean Absolute Error (MAE) below 4.2 kWh and an R² coefficient exceeding 0.91 across residential, commercial, and industrial consumer segments. The automated, modular pipeline architecture ensures reproducibility, scalability, and seamless integration with relational database infrastructure. The proposed framework provides utility operators with actionable decision-support intelligence to proactively mitigate demand spikes, reduce grid imbalances, and optimize resource allocation. 
Results demonstrate that machine learning-driven forecasting constitutes a substantively superior alternative to conventional heuristics, establishing a scalable blueprint for smart grid energy management systems.

Keywords Energy Demand Forecasting; Random Forest Regressor; Smart Grid Optimization; Temporal Feature Engineering; Decision-Support Analytics


Paper ID IJIFR/V13/E8/028 Page No.: 1116-1121

Subject Area Computer Engineering

Authors Chakali Gireesh
Dr. S. Usharani

Abstract The persistent challenge of banking customer churn imposes substantial revenue attrition on financial institutions operating within hyper-competitive digital environments. This paper presents Retain360, a comprehensive end-to-end analytical platform that addresses this challenge through the systematic integration of ensemble classification, survival analysis, and explainable artificial intelligence (XAI). The system is trained and evaluated on the IBM Banking Customer Churn Dataset, encompassing demographic, transactional, and service-usage attributes for 7,043 customer records. A Random Forest classifier, trained with class-balanced weighting and hyperparameter-optimised via Grid Search Cross-Validation, achieves an F1-Score of approximately 0.62 and a ROC-AUC of 0.85 on the held-out test partition — demonstrating discriminative capability substantially exceeding random baselines and competitive with state-of-the-art benchmarks. The survival analysis component employs the lifelines library to fit Kaplan-Meier survival curves and a Cox Proportional Hazards (Cox PH) model, enabling the derivation of individualised hazard functions and survival probability trajectories over customer tenure. These temporally-resolved risk profiles underpin a personalised Customer Lifetime Value (CLTV) estimator that translates survival-derived expected tenure into quantitative revenue projections. Model interpretability is achieved through a tri-layer explainability framework comprising Permutation Importance for global feature ranking, Partial Dependence Plots (PDPs) for marginal effect visualisation, and SHAP (SHapley Additive exPlanations) force plots for instance-level prediction attribution. The complete system is operationalised as a Flask-based web application delivering real-time churn probability scores, risk gauges, SHAP explanations, and survival visualisations through a form-driven interface accessible to non-technical banking professionals. 
Retain360 thus bridges the methodological gap between academic machine learning research and actionable, production-grade customer retention intelligence.
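The Kaplan-Meier component described above (fitted via the lifelines library in the actual system) estimates the survival curve from observed tenures and churn/censoring indicators; a minimal stdlib sketch of the estimator itself, for illustration only:

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate: S(t) after each distinct event time.
    durations: observed customer tenure; events: 1 = churned, 0 = censored."""
    observations = sorted(zip(durations, events))
    event_times = sorted({d for d, e in observations if e == 1})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for d, _ in observations if d >= t)   # still subscribed at t
        deaths = sum(1 for d, e in observations if d == t and e == 1)
        s *= 1 - deaths / at_risk
        curve.append((t, s))
    return curve
```

The survival-derived expected tenure is what feeds the personalised CLTV estimate described in the abstract.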

Keywords Customer Churn Prediction; Random Forest; Cox Proportional Hazards; SHAP Explainability; Customer Lifetime Value


Paper ID IJIFR/V13/E8/027 Page No.: 1108-1115

Subject Area Computer Engineering

Authors A. Pramod Kumar
Dr. S. Usharani

Abstract Accurate crop yield prediction is a critical component in ensuring sustainable agricultural development and efficient resource utilization. Traditional estimation methods, primarily based on manual surveys and empirical observations, are often limited by scalability, time constraints, and their inability to model complex interactions among agronomic variables. This paper presents AgroYield AI, an interpretable machine learning framework for crop yield prediction using multi-dimensional agricultural data. The proposed system leverages a Random Forest regression model trained on diverse features including crop type, seasonal variations, geographical location, rainfall, fertilizer usage, and pesticide application. A robust preprocessing pipeline incorporating categorical encoding and feature normalization is employed to enhance predictive performance. The system integrates predictive modeling with an interactive visualization interface developed using Streamlit and D3.js, enabling real-time data exploration and decision support. Experimental evaluation demonstrates that the proposed model achieves high predictive accuracy and strong generalization capability compared to conventional regression approaches. Furthermore, feature importance analysis provides transparency and interpretability, allowing stakeholders to understand the influence of key agricultural parameters on yield outcomes. The proposed framework offers a scalable and practical solution for data-driven agricultural decision-making, supporting farmers, researchers, and policymakers.
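The preprocessing pipeline mentioned above combines categorical encoding with feature normalization; a minimal sketch of both steps, assuming simple label encoding and min-max scaling (the paper does not specify the exact variants):

```python
def encode_categories(values):
    """Map category labels (e.g. crop type, season) to integer codes."""
    codes = {v: i for i, v in enumerate(sorted(set(values)))}
    return [codes[v] for v in values], codes

def min_max_normalize(values):
    """Scale a numeric feature (e.g. rainfall) into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```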

Keywords Crop Yield Prediction; Machine Learning; Random Forest; Precision Agriculture; Data Analytics


Paper ID IJIFR/V13/E8/026 Page No.: 1099-1107

Subject Area Computer Engineering

Authors Bonasi Anusha
Dr. S. Usharani

Abstract In the modern legal and corporate environment, contract review remains a critical yet resource-intensive task, often requiring significant time, expertise, and manual effort. Traditional approaches to reviewing legal documents are prone to human error, inconsistencies, and inefficiencies, especially when dealing with large volumes of contracts. This paper presents ContractRisk Analyzer AI, an intelligent system designed to automate the identification and classification of legal risk within contract documents using Natural Language Processing (NLP) techniques. The proposed system processes legal contracts in both PDF and plain-text formats, extracting and segmenting the content into individual clauses using advanced sentence tokenization methods. Each clause is evaluated against a structured legal risk knowledge base categorized into High, Medium, and Low risk levels. The classification process is driven by a rule-based NLP engine that ensures transparency by identifying the exact phrases responsible for risk detection. A weighted scoring mechanism aggregates clause-level risks to generate an overall document risk percentage, providing a clear and interpretable assessment. Furthermore, the system integrates an intelligent recommendation module that offers actionable insights based on detected risks, assisting users in making informed legal decisions. Implemented as a web-based application using the Flask framework, the system provides an intuitive user interface with interactive visualizations, including a dynamic risk gauge and clause-level analysis tables. The proposed solution demonstrates the effectiveness of explainable AI in the legal domain, offering a scalable, accessible, and efficient alternative to traditional contract review processes. It significantly reduces review time while maintaining analytical accuracy, making it valuable for legal professionals, organizations, and academic users.
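The weighted scoring mechanism described above, aggregating clause-level risk labels into an overall document risk percentage, can be sketched as follows; the High/Medium/Low weights are illustrative assumptions, not the system's actual values:

```python
RISK_WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}  # illustrative weights

def document_risk_percent(clause_levels):
    """Aggregate clause-level risk labels into an overall risk percentage."""
    if not clause_levels:
        return 0.0
    total = sum(RISK_WEIGHTS[level] for level in clause_levels)
    worst = len(clause_levels) * RISK_WEIGHTS["High"]  # score if every clause were High
    return 100.0 * total / worst
```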

Keywords Contract Analysis; Natural Language Processing; Legal Risk Assessment; Clause Classification; Explainable AI
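The rule-based clause classification and weighted scoring described in the abstract above can be sketched in plain Python. The phrase list, weights, and function names below are illustrative assumptions, not the paper's actual knowledge base:

```python
import re

# Illustrative risk knowledge base: phrase -> (level, weight).
# The paper's real knowledge base is categorized High/Medium/Low; these entries are examples.
RISK_PHRASES = {
    "unlimited liability": ("High", 3),
    "indemnify": ("High", 3),
    "automatic renewal": ("Medium", 2),
    "governing law": ("Low", 1),
}

def split_clauses(text):
    """Naive sentence-level clause segmentation on '.' and ';' boundaries."""
    return [c.strip() for c in re.split(r"(?<=[.;])\s+", text) if c.strip()]

def classify_clause(clause):
    """Return (level, weight, matched_phrases); exposing the phrases keeps the decision explainable."""
    matches = [(p, lvl, w) for p, (lvl, w) in RISK_PHRASES.items() if p in clause.lower()]
    if not matches:
        return ("Low", 0, [])
    phrase, level, weight = max(matches, key=lambda m: m[2])
    return (level, weight, [m[0] for m in matches])

def document_risk(text):
    """Aggregate clause weights into an overall risk percentage of the maximum possible."""
    clauses = split_clauses(text)
    if not clauses:
        return 0.0
    max_w = max(w for _, w in RISK_PHRASES.values())
    total = sum(classify_clause(c)[1] for c in clauses)
    return 100.0 * total / (max_w * len(clauses))
```

A document with one High clause (weight 3) and one Medium clause (weight 2) scores 5 out of a possible 6, i.e. roughly 83%.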


Paper ID IJIFR/V13/E8/025 Page No.: 1086-1098

Subject Area Computer Engineering

Authors Kodavati Mounika
B. Shreesha
Dr. Usha Rani

Abstract Credit risk assessment plays a vital role in ensuring the financial stability and sustainability of lending institutions. Traditional credit scoring methods, primarily based on statistical models such as logistic regression, often fail to capture complex, non-linear relationships inherent in borrower data. This limitation results in reduced predictive performance, especially in dynamic financial environments. To address these challenges, this paper presents CreditShield AI, an explainable machine learning-based loan default risk prediction system designed to enhance accuracy, transparency, and reliability in credit decision-making. The proposed system leverages the Give Me Some Credit dataset, comprising over 150,000 borrower records with multiple financial and behavioral attributes. A structured data pipeline is developed, including data preprocessing, missing value imputation, outlier detection, feature scaling, and exploratory data analysis. Robust statistical techniques such as median imputation and percentile-based winsorization are applied to ensure data quality and consistency. Furthermore, the system adopts Robust Scaler normalization to mitigate the impact of extreme values. A key contribution of this work is the emphasis on explainable AI, ensuring that model predictions can be interpreted in compliance with financial regulatory standards. The system is designed to integrate advanced machine learning models and interpretability techniques such as SHAP and LIME in future stages. The proposed framework not only improves prediction capability but also promotes fairness, transparency, and responsible AI practices in financial risk management.

Keywords Credit Risk Prediction; Machine Learning; Loan Default; Explainable AI; Financial Analytics
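The preprocessing steps named in the abstract above (median imputation, percentile-based winsorization, and robust scaling) can be sketched with the standard library alone. The 5th/95th percentile bounds are assumed for illustration; the paper does not state its bounds here, and a production pipeline would use pandas/scikit-learn:

```python
import statistics

def median_impute(values):
    """Replace missing entries (None) with the median of the observed values."""
    observed = [v for v in values if v is not None]
    med = statistics.median(observed)
    return [med if v is None else v for v in values]

def percentile(sorted_vals, p):
    """Nearest-rank percentile on a pre-sorted list."""
    k = max(0, min(len(sorted_vals) - 1, round(p / 100 * (len(sorted_vals) - 1))))
    return sorted_vals[k]

def winsorize(values, lo=5, hi=95):
    """Clip extremes to the lo-th/hi-th percentiles (bounds are illustrative)."""
    s = sorted(values)
    low, high = percentile(s, lo), percentile(s, hi)
    return [min(max(v, low), high) for v in values]

def robust_scale(values):
    """Center on the median and scale by IQR, as scikit-learn's RobustScaler does."""
    med = statistics.median(values)
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = (q3 - q1) or 1.0
    return [(v - med) / iqr for v in values]
```

Because the scaler uses median and IQR rather than mean and standard deviation, a single extreme outlier barely shifts the transformed values, which is the motivation the abstract gives for choosing it.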


Paper ID IJIFR/V13/E8/024 Page No.: 1070-1085

Subject Area Computer Engineering

Authors K. Bhavani Sankar
V. Vijayalakshmi
Dr. Usha Rani

Abstract Natural disasters such as earthquakes, floods, hurricanes, and wildfires cause extensive damage to infrastructure and human life, making rapid and accurate damage assessment a critical requirement for effective disaster response. Traditional ground-based assessment techniques are time-consuming, risky, and limited in spatial coverage, which delays emergency decision-making processes. To address these limitations, this paper presents DisasterVision AI, an automated satellite imagery analysis system that leverages deep learning for large-scale building damage assessment. The proposed system utilizes a modified Single Shot MultiBox Detector (SSD) with a VGG-16 backbone, enhanced to process six-channel input by combining pre-disaster and post-disaster satellite images. This dual-input architecture enables the model to learn visual differences between temporal image pairs, improving damage detection accuracy. The model is trained using the xView2 dataset, which provides annotated satellite imagery with four damage categories: no-damage, minor-damage, major-damage, and destroyed. The system incorporates advanced training techniques including data augmentation using Albumentations, OneCycle learning rate scheduling, and AdamW optimization for efficient convergence. Performance evaluation is conducted using Mean Average Precision (mAP) metrics across multiple IoU thresholds. Additionally, Non-Maximum Suppression (NMS) is applied for refining detection outputs. Experimental results demonstrate that DisasterVision AI provides fast, scalable, and reliable damage assessment, making it a valuable tool for disaster management authorities and emergency response teams.

Keywords Disaster Damage Assessment; Deep Learning; Satellite Imagery; Object Detection; SSD; xView2 Dataset
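The six-channel dual-input idea described above is simple to state: the three RGB channels of the pre-disaster image and the three of the post-disaster image are stacked along the channel axis before the first convolution, so the network sees both epochs at once. A minimal stdlib sketch of the stacking follows; the real pipeline would operate on tensors (e.g. torch.cat((pre, post), dim=0)) and widen the SSD backbone's first conv layer to 6 input channels:

```python
def stack_pre_post(pre, post):
    """Concatenate two channels-first (C, H, W) images into one (2C, H, W) input.

    `pre` and `post` are nested lists indexed [channel][row][col]; with real
    tensors this is a single channel-axis concatenation.
    """
    if len(pre[0]) != len(post[0]) or len(pre[0][0]) != len(post[0][0]):
        raise ValueError("pre- and post-disaster images must share height and width")
    return pre + post  # channels-first, so list concatenation stacks channels

# The four damage categories annotated in the xView2 dataset used by the paper.
CLASSES = ["no-damage", "minor-damage", "major-damage", "destroyed"]
```

Feeding the stacked pair lets the detector learn temporal differences directly, rather than comparing two independent per-image predictions after the fact.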


Paper ID IJIFR/V13/E8/023 Page No.: 1062-1069

Subject Area Computer Application & Engineering

Authors Sirisani Lavanya
M. Gowthami

Abstract Urban traffic congestion is a critical infrastructure challenge facing modern cities as vehicle populations expand and urban density increases. Conventional fixed-timing traffic signal systems are incapable of adapting to the stochastic and dynamic nature of real-world traffic flows, resulting in wasted green-light time, queue buildup, increased vehicle emissions, and emergency response delays. This paper presents TrafficOpt RL, an end-to-end adaptive traffic signal optimization system that applies the Deep Q-Network (DQN) algorithm to learn intelligent signaling policies at urban intersections through iterative simulation experience. The system is built on a custom Gymnasium-compatible simulation environment modeling a four-way intersection with stochastic Poisson vehicle arrivals. The DQN agent, implemented via the Stable-Baselines3 framework, utilizes experience replay, target network stabilization, and epsilon-greedy exploration to converge on policies minimizing aggregate vehicle waiting times and maximizing intersection throughput. All training metrics and simulation data are persistently stored in a MySQL relational database through automated callback logging, enabling systematic performance analysis. Evaluation via direct comparison against a fixed-timing baseline demonstrates measurable superiority of the reinforcement learning approach across three performance dimensions: average vehicle waiting time, total throughput, and composite efficiency score. Three analytical visualizations are generated to communicate system performance. TrafficOpt RL constitutes a practical proof-of-concept for deep reinforcement learning integration into intelligent transportation systems and smart city infrastructure.

Keywords Deep Reinforcement Learning; Traffic Signal Optimization; Deep Q-Network; Intelligent Transportation Systems; Adaptive Control
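The intersection dynamics the abstract above describes (stochastic Poisson arrivals per approach, departures on the approach holding the green phase) can be sketched as a single environment step. The arrival rates, service count, and reward shape below are illustrative assumptions; the actual system wraps this logic in a Gymnasium-compatible environment and trains a Stable-Baselines3 DQN agent on it:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's algorithm: sample a Poisson-distributed arrival count with mean lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def step(queues, action, rates, rng, service=3):
    """One simulation step: arrivals on every approach, departures on the green one.

    queues : vehicles currently waiting per approach
    action : index of the approach given the green phase
    rates  : Poisson arrival rate per approach (illustrative values)
    Returns (new_queues, reward); the reward penalizes total waiting vehicles,
    which is what a DQN agent would learn to minimize over many episodes.
    """
    new_q = [q + poisson(r, rng) for q, r in zip(queues, rates)]
    new_q[action] = max(0, new_q[action] - service)  # green phase serves up to `service` cars
    return new_q, -sum(new_q)
```

A fixed-timing baseline would cycle `action` round-robin regardless of queue state; the learned policy instead conditions the phase choice on the observed queues, which is where the reported waiting-time gains come from.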


Paper ID IJIFR/V13/E8/022 Page No.: 1148-1153

Subject Area EDUCATION

Authors A. SARALA
Dr. J.E. MERLIN SASIKALA

Abstract The present study investigates the impact of blended learning on the academic performance and learning attitudes of higher secondary students. A total of 100 students from two higher secondary schools were selected through purposive sampling. The results reveal a noticeable difference in achievement scores between students exposed to blended learning and those taught through conventional methods, favouring the blended approach. Blended learning refers to the integration of classroom teaching with digital learning tools such as online discussions, multimedia content, and interactive activities. This approach supports flexible learning, enhances engagement, and allows students to learn at their own pace. The findings indicate that students who participated in blended learning demonstrated better academic outcomes compared to those in traditional settings. Therefore, incorporating blended strategies at the higher secondary level can significantly improve learning effectiveness and student engagement.

Keywords Blended Learning, Academic Achievement, Higher Secondary Students, Learning Effectiveness


Paper ID IJIFR/V13/E8/012 Page No.: 1036-1041

Subject Area Law

Authors S K Sahil

Abstract Section 69 of the Bharatiya Nyaya Sanhita, 2023 introduces a new offence addressing sexual intercourse obtained through deceitful means, particularly false promises of marriage and other inducements. While the provision aims to strengthen the legal framework for protecting women from sexual exploitation, it raises significant concerns regarding its interpretation and practical application. The section suffers from vague and ambiguous language, lack of clarity in defining “consent” and “deceitful means,” and the absence of comprehensive safeguards against misuse. It also reflects a gender-biased approach by recognizing only women as victims, thereby excluding men and LGBTQ+ individuals from its protection. This study adopts a doctrinal methodology, relying on primary sources such as statutory provisions and judicial decisions along with secondary sources including books, research articles, journals and credible electronic resources. Analytical, descriptive and exploratory approaches are used to critically examine the scope, limitations and implications of the provision. The findings reveal that Section 69 of the BNS, despite its progressive intent, creates legal uncertainty, difficulty in proving intention and potential for misuse, which may lead to inconsistent judicial outcomes. The study concludes that there is an urgent need for clearer judicial interpretation, precise legislative drafting, and balanced safeguards to ensure that the provision achieves its intended objective without compromising fairness and justice.

Keywords Free Consent, Deceitful Means, False Promise of Marriage, Misuse of Law, Gender Bias


Paper ID IJIFR/V13/E8/011 Page No.: 1310-1321

Subject Area Finance

Authors Dr Pravin Choudhary
Harleen Kaur
Tina Sachdeva

Abstract This research examines the organizational transformation that followed the merger of Bank of Baroda with Vijaya Bank and Dena Bank on April 1, 2019, an integration that resulted in the formation of India’s third-largest public sector bank. The main objective is to analyze the various dimensions of organizational change brought about by merger & acquisition, specifically focusing on the challenges and outcomes related to operations, human resource management, cultural alignment, and financial performance. Our study is based on both primary and secondary data. Secondary data were taken from the annual reports published on the selected bank's website, while primary data on the pre- and post-merger impact on organizational change were collected through a questionnaire distributed among employees of the selected bank. The study evaluates several key parameters, including Gross NPA, Net NPA, Net Interest Margin, Return on Assets, and Return on Equity. In addition, it examines the work culture and work environment of employees in the selected bank. The statistical tools used for analysis are the mean and standard deviation, with t-tests and chi-square tests applied for hypothesis testing. We conclude that the impact of the merger and acquisition on the organization is positive.

Keywords Organizational Change, Gross NPA, Net NPA, ROA, ROE and NIM


Paper ID IJIFR/V13/E8/010 Page No.: 1042-1050

Subject Area Humanities (English)

Authors Dr. Vaishali Kiran Ghadyalji

Abstract Human nature is fundamentally malevolent, as it is very natural for human beings to be selfish and narcissistic. Humans are born with greed, envy and jealousy, and these innate feelings make them indulge in different kinds of immoral activities. On the contrary, their benevolence originates from the conscious and constant imbibing of moral values. Many philosophers and scholars, from Aristotle to Thomas Hobbes, Charles Darwin, Sigmund Freud, Peter Muris, Harald Merckelbach, Henry Otgaar, David Coady, Lee Besham and a plethora of others, have tried to explain and restate this fact. This paper is an attempt to explore the inexorable and unfortunate element of malevolence in human beings by juxtaposing the supposed villainous characters, Manthara from the Ramayana and Shakuni from the Mahabharata, with an analysis of their influential endeavours to inspire their target audience to act according to their philosophy. Manthara, hunchbacked and old, was the confidante attendant and favourite maid of Queen Kaikeyi in the Indian epic Ramayana, who instigated her to convince King Dasharatha to coronate Bharata in the place of Lord Rama and to ask for fourteen years of exile for Lord Rama. Shakuni was the king of Gandhar and the brother of Gandhari, the queen of Dhritarashtra and Hastinapur and the mother of the hundred Kauravas in another Indian epic, the Mahabharata. Shakuni is delineated as one of the exceptionally intelligent characters in the epic, but a very scheming one. This article endeavours to compare these two characters possessing malevolent traits, trying to establish that malevolence is not grounded in a particular gender, position, era or other such aspects, and that the malevolence of both of these characters does enclose certain grey shades.

Keywords human malevolence, scheming, disability, manipulation, malicious counsel


Paper ID IJIFR/V13/E8/007 Page No.: 1051-1061

Subject Area HRM

Authors Ms. Liza Alex
Prof. (Dr.) Mini Joseph
CA Reshma Rachel Kuruvilla

Abstract Purpose The objective of this article is to examine how green banking practices and competitive advantage are associated with one another in the Indian banking industry, and in particular how sustainable performance functions as a mediator in this relationship. The study aims to clarify how ecologically responsible actions result in strategic advantages for banks (Gunawan et al., 2022; Siddik et al., 2024) by investigating the degree to which sustainable performance explains the impact of green banking practices on competitive advantage. Design/methodology/approach With the aim of examining the connection between eco-friendly banking practices and competitive advantage in Indian banks, this quantitative study uses structural equation modeling with SmartPLS 4.0, with a focus on sustainable performance as a mediator. Convenience sampling will be used to gather the primary dataset from Indian private banks. The causal relationship between GBPs and CA will be assessed using SEM, and the mediating function of SP in this relationship will also be ascertained. Findings According to the analysis, the banking industry's competitive advantage is increased by putting green banking practices into effect. Additionally, the relationship between green practices and competitive advantage is strengthened by sustainable performance, indicating that green initiatives are more effective when banks demonstrate strong sustainable performance. According to the partial mediation finding, banks gain from green banking practices in two ways: they directly increase their competitive advantage and they improve sustainable performance (Siddik et al., 2024). Originality/value By identifying the circumstances in which green banking practices improve a company's competitiveness, this study contributes to the expanding corpus of research on sustainable banking. It spotlights the significance of sustainable performance both as a strategic tool for enhancing the advantages of green banking and as a desired result (Siddik et al., 2024).

Keywords Green banking practices, sustainable performance, competitive advantage, mediation analysis, Indian banking sector.


Paper ID IJIFR/V13/E8/005 Page No.: 1028-1035

Subject Area Computer Engineering

Authors G. Yarasi
V. Lakshmi Narasimhan
D. Rammohan

Abstract This study presents a comparative evaluation of four deep learning architectures—CNN, ResNet50, VGG16, and InceptionV3—for chilli growth stage recognition and leaf disease classification. The models were assessed to determine their effectiveness in accurately distinguishing developmental stages and identifying disease conditions from image data. Experimental results demonstrate that transfer learning substantially improves classification performance across all architectures. Among the evaluated models, ResNet50 consistently achieved superior accuracy and overall performance in both growth stage and disease classification tasks. These findings highlight the effectiveness of deep transfer learning approaches in agricultural image analysis. Future research will focus on validating model robustness under real-field environmental conditions, designing lightweight architectures for mobile and edge deployment, and expanding the framework to support multi-crop classification systems for broader agricultural applications.

Keywords Chilli Image Classification, ResNet50, VGG16 & InceptionV3 Algorithms, Comparative Analytics


Paper ID IJIFR/V13/E8/003 Page No.: 1377-1383

Subject Area Commerce

Authors Reshmi R
Prof. Dr. Antha S

Abstract The role of paramedical professionals has altered drastically due to the increasing use of digital technologies in healthcare settings. This workforce, once limited to assisting in medical roles, is now expected to contribute as technologically competent members of modern healthcare systems. Besides presenting an overview of the new competencies expected of them, as well as the challenges faced in the process, this paper seeks to discuss the effects of digital technologies on the roles of paramedical professionals. The premise of this study is an overview of the existing research on workforce evolution and digital transformation in healthcare. According to the study, technologies such as telemedicine, artificial intelligence, the Internet of Things (IoT), and data-based healthcare systems are drastically altering roles, functions and expectations, thus requiring skill development. Some of the challenges, however, include a lack of training opportunities, resistance to new technologies, and infrastructure constraints. The paper ends with an overview of how to improve the digital preparedness of paramedical professionals.

Keywords Paramedical Professionals, Digital Transformation, Healthcare Systems, Skill Development, Workforce Evolution








About IJIFR

The International Journal of Informative & Futuristic Research (IJIFR) is a multidisciplinary, peer-reviewed, open-access online research journal published monthly. It provides a multidisciplinary platform for accurate, genuine, speculative and intellectually rigorous discussion, with the vision of understanding and comprehending industrial experiences that describe significant advances in changing global scenarios. All authors receive a hard copy of their certificate of publication free of cost. IJIFR is dedicated to increasing the depth of inquiry across disciplines with the ultimate aim of expanding knowledge of each subject. The journal follows a blind peer-review system in order to provide a high-quality intellectual platform for researchers across the world, bringing full transparency to its review process. Authors are invited to contribute articles presenting high-quality theoretical and experimental research results, projects, case studies, reviewed work, analytical and simulation models, technical notes, and industrial experiences that describe significant advances in their research areas. IJIFR provides an opportunity to present innovative and constructive ideas and the outcomes of ongoing research in all areas of study in the context of changing global scenarios. The journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.


Special Issue

The International Journal of Informative & Futuristic Research (IJIFR) welcomes special-issue proposals for new and recurring national conferences, international conferences, national seminars, and workshops conducted by colleges, universities, and engineering & management institutes. The first aim is to provide opportunities for academics from a range of disciplines and countries to share their research both at the conference podium and through IJIFR's double-blind refereed publications. Proposals will be selected to ensure the conference program offers a comprehensive, non-commercial, objective, and diverse treatment of issues related to the core concepts of the conference title, IT organizational domains, and current IT topics. Attention will be given to diversity of institutions, presenters, and geographic locations. This is one of the services offered by IJIFR that is uniquely intended to support researchers and conference organizers: the journal provides conference organizers a privileged platform for publishing research work presented in conference proceedings. The journal aims to disseminate scientific and basic research and to establish long-term international collaborations and partnerships with academic communities and conference organizers. We invite you to submit proposals on any topic within the broad set of research and application areas covered by IJIFR. The conference series examines the concept of diversity as a positive aspect of a global world and globalised society, and presenting at a conference is an efficient and exciting way to share your research and findings.


Focus & Scope

The journal welcomes researchers and authors from all parts of the world to present their latest outstanding developments and state-of-the-art research work for the publication of high-quality papers whose results are either experimental or of projected application in their related fields. IJIFR publishes original material concerned with the theoretical underpinnings, efficacious application, and potential for evolving technology integration at a global range across all education levels. It aims to guide society to formulate and reinvent education, and to be at the cutting edge of knowledge, modernization, erudition, and innovation. Papers submitted for publication are selected and peer reviewed through a full double-blind international refereeing process to ensure inventiveness, uniqueness, originality, relevance, and readability. Our reviewers are highly qualified academic and industry experts who ensure that only quality research is published by IJIFR. Articles submitted to the journal should meet international standards and must not be under consideration for publication elsewhere.


Editorial Policy

The editors ensure that this journal is regularly published, widely read and circulated, has high impact, and attracts an adequate supply of high-quality papers from an international range of authors. Any selected referee who feels unqualified to review the research reported in a manuscript, or who knows that its prompt review will be impossible, should notify the editor and withdraw from the review process. Manuscripts received for review must be treated as confidential documents; they must not be shown to or discussed with others except as authorized by the editor. An editor should evaluate manuscripts for their intellectual content without regard to the race, gender, sexual orientation, religious belief, ethnic origin, citizenship, or political philosophy of the authors. Double-blind review is carried out to guard against bias in the evaluation of manuscripts.


All articles published Open Access will be immediately and permanently free for everyone to read and download. Manuscripts should follow the style of the journal and are subject to both review and editing. IJIFR is multidisciplinary in nature, so topics are not limited to the list below. IJIFR generally publishes research papers in the following fields:


SOCIAL SCIENCE AND HUMANITIES, SOCIOLOGY, SOCIAL WELFARE, ANTHROPOLOGY, RELIGIOUS STUDIES, VISUAL ARTS, POLITICAL, CULTURAL ASPECTS OF DEVELOPMENT, TOURISM MANAGEMENT, PUBLIC ADMINISTRATION, PSYCHOLOGY, PHILOSOPHY, POLITICAL SCIENCE, HISTORY, EDUCATION, WOMEN STUDIES, BUSINESS AND MARKETING, ECONOMICS, FINANCIAL DEVELOPMENT, ACCOUNTING, BANKING, MANAGEMENT, HUMAN RESOURCES, SCIENCE AND ENGINEERING, TECHNOLOGY AND INNOVATION, ENVIRONMENTAL STUDIES, CLIMATE CHANGE, AGRICULTURAL, RURAL DEVELOPMENT, URBAN STUDIES, BIOTECHNOLOGY, HOTEL AND TOURISM, ENTREPRENEURSHIP DEVELOPMENT, BUSINESS ETHICS, DEVELOPMENT STUDIES, ASTRONOMY AND ASTROPHYSICS, CHEMISTRY, EARTH AND ATMOSPHERIC SCIENCES, PHYSICS, BIOLOGY IN GENERAL, AGRICULTURE, BIOPHYSICS AND BIOCHEMISTRY, BOTANY, ENVIRONMENTAL SCIENCE, FORESTRY, GENETICS, HORTICULTURE, HUSBANDRY, NEUROSCIENCE, ZOOLOGY, COMPUTER SCIENCE, ENGINEERING, ROBOTICS AND AUTOMATION, MATERIALS SCIENCE, MATHEMATICS, MECHANICS, STATISTICS, HEALTH CARE & PUBLIC HEALTH, NUTRITION AND FOOD SCIENCE, PHARMACEUTICAL SCIENCES ETC.


Paper Submission

Submission of papers to this journal proceeds entirely online, and you will be guided stepwise through the creation and uploading of your files. All correspondence, including notification of the Editor's decision and requests for revision, takes place by e-mail, removing the need for a paper trail. Clearly indicate who will handle correspondence at all stages of refereeing and publication, as well as post-publication. Ensure that phone numbers (with country and area code) are provided in addition to the e-mail address and the complete postal address. Contact details must be kept up to date by the corresponding author. Papers that do not reach the required standards of quality and rigour demanded by the journal, in terms of theoretical framework and methodology, will not be accepted for publication. Full papers should be submitted electronically via the IJIFR website, i.e. www.ijifr.org, or by directly mailing the editor at [email protected] or [email protected], in accordance with the author guidelines and paper format of this journal. The entire paper should be created in one document in Word format (.DOC or .DOCX). The first page is the title page, showing the full title and each author's name, position, affiliation, and present address. Also include an e-mail address for editorial correspondence. If there is more than one author, please indicate with an asterisk (*) the author who should receive correspondence.


Important Dates
Important Dates For Every Issue
Last Date of Paper Submission : 20th of Every Month
Acceptance Notification : Within 7-10 days after paper submission
Final Paper Acceptance Notification up to: 27th of Every Month
Online Publication of Papers: 30th of Every Month
IJIFR MAY 2026 EDITION (CONTINUOUS 154TH EDITION)
VOLUME 13, ISSUE 9, MAY 2026
FINAL ORIGINAL PAPER SUBMISSION TILL
27-MAY-2026
ACCEPTANCE NOTIFICATION
WITHIN 3-7 DAYS AFTER AUTHENTIC REVIEW PROCESS
PAPER ID ACKNOWLEDGEMENT
ONLY AFTER ONLINE SUBMISSION
Join Editorial Board
Announcement
Scholarly Open Access, Peer-Reviewed, and Refereed Journal, Impact Factor 8.057, AI-Powered Research Tool, Multidisciplinary, Monthly, Indexing in all major databases & metadata, Citation Generator, Digital Object Identifier (DOI), Plagiarism Report Generation, Scientific & Analytic Review System
Nominal Processing & Publishing Charges (includes online publication, indexing & abstracting in various online repositories, a hard copy of the certificate for each author, and a hard copy of the paper & letter of acceptance in the journal cover).
All manuscripts are reviewed fairly based on the intellectual content of the paper, regardless of the gender, race, ethnicity, religion, citizenship, or political values of the author(s). IJIFR provides free access to research information to the international community without financial, legal or technical barriers. All submitted articles should report original, previously unpublished research results, experimental or theoretical, and must not be under consideration for publication elsewhere. All accepted papers will be processed for indexing in citation databases that track citation frequency/data for each paper. Contributions are therefore welcomed from practitioners, researchers, scholars and professional experts working in private, public and other organizations or industries.
High Impact Factor Journal with more than 4200+ satisfied authors worldwide. IJIFR is indexed by Scientific Journal Impact Factor: 6.051, Index Copernicus Value (ICV): 6.62, Academia, Internet Archive, TechRepublic, Citeactor, Scribd, JSTOR, WorldCat, ROAD, Google Scholar, SlideShare, Jour Informatics, Genamics, Biblioteca, Scientific Research Indexing, LABII, ISRA, I2OR, Journal Index.Net, Newjour, Figshare, CiteSeerX, Open Access Journals, References*, Research Info, OAJI Indexing, E-LIS, Genamics
IJIFR is approved by the UGC & the National Institute of Science Communication and Information Resources (NISCAIR), Delhi, India. Nominal Processing & Publishing Charges (includes online publication, indexing & abstracting in various online repositories, a hard copy of the certificate for each author, and a hard copy of the letter of acceptance (Original Papers Only)) with additional benefits.
Authors are advised to submit authentic research work only.
Please feel free to contact us at [email protected] or [email protected]