Software Research Lab
Professor, Department of Software Engineering / Computer Science,
University of Saskatchewan, Canada
Title: A Machine Learning Based Framework for Validating Code Clones from Big Code
A code clone is a pair of similar code fragments within or between software systems. Since code clones often negatively impact the maintainability of a software system, several code clone detection techniques and tools, including Big Data clone detectors, have been proposed. However, clone detection tools are not perfect, and their reports often contain an unknown number of false positives, or clones that are irrelevant from a specific project management or user perspective. These issues become even more crucial for Big Data clone detectors, which may report millions of clones from Big Data software repositories; a detector may report thousands of false positive clones from Big Code, which is impossible for humans to validate, let alone validate accurately. Furthermore, to detect all possible similar source code patterns, clone detection tools work at the syntax level and lack user-specific preferences. This often means the clones must be manually inspected before analysis in order to remove false positives from consideration, a validation effort that is very time-consuming and error-prone even for an accurate clone detection tool. In this talk, we propose a machine learning approach for automating the validation process. First, a training dataset is built by taking code clones reported by several clone detection tools for different subject systems and manually validating those clones. Second, the trained model is further refined with millions of validated clone pairs from the BigCloneBench dataset, a clone benchmark with over 8.5 million clone pairs from 25K Java projects. Third, several features are extracted from those clones to train the machine learning model. The trained model is then used to automatically validate clones without human inspection.
Thus the proposed approach can be used to remove false positive clones from detection results, automatically evaluate the precision of any clone detector on any given dataset, evaluate existing clone benchmark datasets, or even build new clone benchmarks and datasets with minimal effort. In an experiment with clones detected by several clone detectors in several different software systems, our approach achieved an accuracy of up to 87.4% when compared against manual validation by multiple expert judges, and it outperformed existing related approaches for clone classification in several comparative studies. I will also talk about how classical clone detection tools could be scaled to Big Data with the validation framework, and about other applications of machine learning and Big Data analytics in software engineering, in particular bug localization and concept location.
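As an illustration of the kind of pipeline described above, the following minimal Python sketch extracts two toy features from a candidate clone pair and applies a simple decision rule in place of the trained classifier. The feature set, the threshold, and the averaging rule are illustrative assumptions, not the framework's actual features or model.

```python
import re

def clone_features(fragment_a: str, fragment_b: str) -> dict:
    """Extract simple similarity features from a candidate clone pair.
    Token overlap and size ratio stand in for the richer syntactic
    features a real clone validator would use."""
    tokens_a = set(re.findall(r"\w+", fragment_a))
    tokens_b = set(re.findall(r"\w+", fragment_b))
    jaccard = len(tokens_a & tokens_b) / max(1, len(tokens_a | tokens_b))
    lines_a = fragment_a.strip().count("\n") + 1
    lines_b = fragment_b.strip().count("\n") + 1
    size_ratio = min(lines_a, lines_b) / max(lines_a, lines_b)
    return {"jaccard": jaccard, "size_ratio": size_ratio}

def validate_clone(features: dict, threshold: float = 0.5) -> bool:
    """Stand-in for the trained classifier: accept the pair as a true
    clone when the average feature score clears a decision threshold."""
    score = (features["jaccard"] + features["size_ratio"]) / 2
    return score >= threshold

# A renamed-identifier (Type-2-style) clone pair.
a = "int add(int x, int y) { return x + y; }"
b = "int add(int p, int q) { return p + q; }"
f = clone_features(a, b)
print(validate_clone(f))  # True
```

A real system would replace `validate_clone` with a model trained on manually validated pairs, but the shape of the pipeline (feature extraction, then automated decision) is the same.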
Chanchal K. Roy is Co-Director of the Software Research Lab and Professor of Software Engineering/Computer Science at the University of Saskatchewan, Canada. He is the lead and Program Director of an NSERC CREATE graduate program on Software Analytics Research and a co-lead of the Data Management and Repository group of an NSERC Canada First Research Excellence Fund (CFREF) on Food Security. As the co-author of the widely used NiCad code clone detection system, he has published more than 180 refereed publications, many of them in premier software engineering conferences and journals, that have been cited more than 8,000 times. Dr. Roy works in the broad area of software engineering, with particular emphasis on software clone detection and management, software evolution and maintenance, recommender systems in software engineering, automated software debugging, and big data analytics in software engineering. His contributions to the software maintenance community, and particularly to the software clones community, have been highly influential, winning Most Influential Paper (a.k.a. Test of Time) awards at SANER 2018, ICPC 2018 and SANER 2021. He has been recognized with the New Scientist Research Award of the College of Arts and Science of the University of Saskatchewan and the university-wide New Researcher Award. He is one of three Canadian computer scientists honoured with a prestigious award for young researchers, the 2018 Outstanding Young Computer Science Researcher Award from CS-Can/Info-Can, a national, non-profit society dedicated to representing all aspects of computer science and the interests of the discipline across Canada. Dr. Roy was a vision keynote speaker at WCRE/CSMR 2014 on software clones, and a keynote speaker at both IWSC 2018 and IEEE R10HTC 2018.
He serves widely on the program committees of major software engineering conferences such as ICSE, ASE, ICSME, SANER, MSR, ICPC and SCAM, and is a regular reviewer for the major journals in software engineering. He has served as chair and/or program committee member for most of the conferences in his area, including General Chair for ICPC 2014, SCAM 2019 and IWSC 2015, and Program Co-chair for ICPC 2018, IWSC 2012 and ISEC 2022. He has attracted over $4M in external funding since joining USask, including an NSERC Discovery Accelerator Supplement grant and an NSERC CREATE grant, and he holds leading roles in two CFREF grants in Food Security and Water Security. Dr. Roy's recent work on a new way of searching Stack Overflow was featured in the Stack Overflow blog and subsequently in most of the major tech news websites and blogs, such as ACM TechNews, TechRepublic, Hacker News, SD Times, and Reddit.
Institute of Continuing Education
Faculty of Science and Technology
American International University Bangladesh, Bangladesh
Title: IoT and OT Hacking
Md Manirul Islam is an Associate Professor of Computer Science and Director of the Institute of Continuing Education and IT at the American International University-Bangladesh (AIUB). He is the lead architect of his university's data center and network infrastructure. Mr. Islam is a member of the Cisco Networking Academy's Global Advisory Board. He holds several industry certifications in networking and system administration, including the CCNP, and is an award-winning Instructor Trainer for IT Essentials, CCNA, CCNA Security, Cybersecurity Operations, DevNet, IoT Security, IoT and Big Data Analytics. Mr. Islam has several journal publications, and his research interests lie in the areas of Quantum Networking, IoT, and Big Data Analytics.
Department of Electrical & Electronic Engineering, University of Dhaka, Bangladesh
and Specially Appointed Associate Professor
Osaka University, Japan
Website: https://du.ac.bd/faculty/faculty_details/APE/1421, http://AhadVisionLab.com
Title: How to Review & Reply: A Few Points on Preparing a Revised Submission
This tutorial will cover some important points regarding how reviewers judge your work, how to write a rebuttal, and how to prepare a revised submission or the final camera-ready submission, with examples from various good journals. These practices vary from one journal or conference to another; however, the tutorial will illustrate most of the core and essential points. It will give researchers some ideas on how to do a better review, and through this a researcher can also learn how his or her own paper will be judged by reviewers. Authors need to address these points carefully so that a paper can be accepted easily. During the rebuttal, there are some basic steps to follow: if a revised submission does not cover the points raised by the reviewers, it may be rejected. For the camera-ready submission, there is likewise a series of steps to deal with. The tutorial will walk through examples drawn from several journals and conferences.
Md Atiqur Rahman Ahad, SMIEEE, SMOSA; Professor, University of Dhaka (DU); Specially Appointed Associate Professor, Osaka University. He studied at the University of Dhaka, the University of New South Wales, and the Kyushu Institute of Technology. He has authored/edited 10 books with Springer, e.g., “IoT-Sensor Based Activity Recognition”, “Motion History Images for Action Recognition and Understanding”, and “Computer Vision and Action Recognition”. He has published 180+ journal/conference papers and chapters, delivered 130+ keynote/invited talks, and received 35+ awards/recognitions. He is an Editorial Board Member of Scientific Reports, Nature; Associate Editor of Frontiers in Computer Science; Editor of the International Journal of Affective Engineering; Editor-in-Chief of IJCVSP (http://cennser.org/IJCVSP); Guest Editor for PRL, Elsevier; JMUI, Springer; JHE; and IJICIC; and a member of ACM and IAPR. More: http://AhadVisionLab.com
Data Scientist, Copyright Australia
Casual Academic, James Cook University, Australia
Former Academic, Sydney University, Australia
Course Content writer of Python, TAFE NSW.
Title: Visualization of Large Volumes of Data Using R
The current world is driven by data: every second, a huge amount of data is generated on the internet. Almost every company and organisation in the world depends, directly or indirectly, on data; it plays a vital role for every company and is considered a valuable asset. Data visualisation is one of the key techniques for translating this data and gaining insight from it, and it is an important part of the data science process of collecting, processing, and modelling data. When we set out to visualise data, we have to consider its volume, velocity and variety, and sometimes its dimensionality as well. Selecting the proper tools to visualise the data is therefore a vital issue, since various practical concerns are involved. R/RStudio is one of the important tools for visualising large volumes of high-dimensional data. This hands-on session will demonstrate various techniques for visualising large volumes of data using R/RStudio and ggplot2.
Mohammed Golam Zilani graduated from the CSE department of Khulna University and completed his Masters in Data Science at the University of Sydney. He is currently working as a Data Scientist at Copyright Australia and as a casual academic in the Master of Data Science program at James Cook University, Australia. He was formerly an academic at the University of Sydney and the content writer of the Python course at TAFE NSW. Zilani has been working in the IT industry for the last 20 years, the last 5 of them in the data science space. His main expertise is in Python, R, data visualisation, cloud computing and Big Data, and his research interests are in the areas of machine learning and data mining. He is the initiator of a well-known YouTube channel for learning data science and is involved in various smart village and smart city studies. He is a member of the Australian Computer Society.
Department of Software Engineering
Computer Science, University of Saskatchewan, Canada
Title: Human Centric Machine Learning-based Bug Inducing Commit Detection Models and Their Adoption in IDEs
Detecting Bug Inducing Commits (BIC), or Just-in-Time (JIT) defect prediction, using Machine Learning (ML) based models requires tabulated feature values, mainly extracted from the source code or the historical maintenance data of a software system. Existing studies have utilized GitHub Statistics (GS), n-gram based source code text processing, and developer information as the features in ML-based bug detection models. In this study, we extracted software developers' coding styles/patterns to represent commits and investigated whether they are helpful for detecting bugs in software systems. While JIT use of BIC detection can reduce bugs in software and reduce maintenance cost, its adoption by software developers is not significant, as there is no easy way to integrate such algorithms into Integrated Development Environments (IDEs) such as Visual Studio or PyCharm. Therefore, we designed and implemented a usability pattern that helps developers integrate the algorithms easily into an IDE. Our user study shows that the pattern is promising for improving the usability of Visual Studio.
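To make the idea of "tabulated feature values" concrete, the sketch below tabulates a few GitHub-statistics-style features for a single commit. The field names, the chosen features, and the change-entropy measure are illustrative assumptions for this sketch, not the study's actual feature set.

```python
import math

def commit_features(commit: dict) -> dict:
    """Tabulate illustrative JIT-defect-prediction features for one commit:
    total churn, number of files touched, spread of the change, and a
    message-based flag. All names here are placeholders."""
    churn = commit["lines_added"] + commit["lines_deleted"]
    files = commit["files_changed"]  # file name -> lines changed
    total = sum(files.values())
    # Entropy of the change spread: 0 when one file absorbs all changes,
    # log2(n_files) when changes are spread evenly across files.
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in files.values() if c > 0)
    return {
        "churn": churn,
        "n_files": len(files),
        "entropy": round(entropy, 3),
        "is_fix": int("fix" in commit["message"].lower()),
    }

c = {"lines_added": 30, "lines_deleted": 10,
     "files_changed": {"a.py": 20, "b.py": 20},
     "message": "Fix null check in parser"}
feats = commit_features(c)
print(feats)  # {'churn': 40, 'n_files': 2, 'entropy': 1.0, 'is_fix': 1}
```

Rows of such feature vectors, labelled by whether the commit later proved bug-inducing, are what an ML model consumes; an IDE integration would compute them on the fly for the commit being prepared.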
Dr. Banani Roy is an Assistant Professor of Computer Science and Director of the Interactive Software Engineering Lab at USask. She works with graduate and undergraduate students and postdocs/RAs on building a cloud framework to support multidisciplinary scientists in large-scale data analysis. She is part of GWF's Core Computer Science Team, where she plays a key role in migrating a legacy water modelling system into a modern programming environment. Her research interests are engineering interactive systems, collaborative scientific workflow management systems and Big Data analytics. She has received various research grants, including USask's PNSERC, NSERC CREATE, NSERC Discovery and Compute Canada RAC for the P2IRC project.
Lecturer in Computing and Security
School of Science
Academic Centre of Cyber Security Excellence (ACCSE)
Edith Cowan University, Australia
Title: Secure Edge Computing: Applications, Techniques and Challenges
The internet is making our daily life as digital as possible, and this new era is called the Internet of Everything (IoE). Edge computing is an emerging data analytics concept that addresses the challenges associated with IoE. More specifically, edge computing facilitates data analysis at the edge of the network instead of interacting with cloud-based servers. As a result, more and more devices are being deployed in remote locations without any substantial monitoring strategy.
This increased connectivity and the devices used for edge computing will create more room for cyber criminals to exploit the system's vulnerabilities. Ensuring cyber security at the edge should not be an afterthought or a huge challenge, yet the devices used for edge computing are not designed with traditional IT hardware protocols in mind. There are diverse use cases for edge computing and the Internet of Things (IoT) in remote locations; however, cyber security configuration and software updates are often overlooked exactly when they are most needed to fight cybercrime and ensure data privacy. The threat landscape in the context of edge computing therefore becomes wider and far more challenging.
There is a clear need for collaborative work throughout the entire value chain of the network. This talk will address the cyber security challenges associated with edge computing and provide a bigger picture of the concepts, techniques, applications, and open research directions in this area.
Dr. Mohiuddin Ahmed is currently working as a Lecturer of Computing and Security in the School of Science at Edith Cowan University. Mohiuddin has been working in the areas of data analytics and cyber security, in particular false data injection attacks in the Internet of Health Things (IoHT) and the Internet of Flying Things (IoFT). His research projects are funded by different external agencies. He has edited books on data analytics, security analytics, blockchain and other contemporary issues, and has engaged with media outlets such as newspapers, magazines, and The Conversation. He is an ACM Distinguished Speaker, an Australian Computer Society Certified Professional and a Senior Member of IEEE.
School of Information Technology
York University, Canada
Title: Natural Language Interactions with Data Visualizations
Analyzing a large amount of data is at the heart of many decision-making tasks. However, analytics is primarily accessible to people who know how to apply different statistical and machine learning methods as well as visualization techniques; most people are unable to make sense of large datasets because they are not experts in data science and analytics. In this talk, I will describe how we can address these challenges by taking an innovative, interdisciplinary approach that combines data visualization and human-computer interaction with natural language processing. The goal is to support a diverse range of users with different levels of skill in analyzing large datasets faster and more effectively through natural language. To that end, the system leverages machine learning to understand the information needs expressed in natural language and to generate answers for effective comprehension. We will demonstrate several systems that support natural language interactions, enabling a broad range of users to effectively analyze and get insights from large datasets. I will conclude the talk with an overview of ongoing and future work on building automatic data storytelling tools and accessible data visualizations.
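As a toy illustration of translating a natural language question into a visualization, the sketch below maps a query onto a simple chart specification via keyword matching. Real systems of the kind described in the talk use learned models rather than hand-written rules, and every name and rule here is an invention of this sketch.

```python
def query_to_chart_spec(query: str, columns: list) -> dict:
    """Map a natural language question to a toy chart specification
    (mark type, axes, aggregation) by keyword matching against the
    dataset's column names."""
    q = query.lower()
    spec = {"mark": "bar", "x": None, "y": None, "aggregate": None}
    # Temporal phrasing suggests a line chart rather than bars.
    if "over time" in q or "trend" in q:
        spec["mark"] = "line"
    # Aggregation words map to standard aggregate functions.
    for word, agg in (("average", "mean"), ("total", "sum"),
                      ("count", "count")):
        if word in q:
            spec["aggregate"] = agg
    # First column mentioned becomes x, the second becomes y.
    for col in columns:
        if col.lower() in q:
            if spec["x"] is None:
                spec["x"] = col
            else:
                spec["y"] = col
    return spec

spec = query_to_chart_spec("average sales by region", ["region", "sales"])
print(spec)  # {'mark': 'bar', 'x': 'region', 'y': 'sales', 'aggregate': 'mean'}
```

The learned systems replace each rule with a model (intent classification, column grounding, chart recommendation), but the output, a declarative chart specification, is the same kind of object.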
Dr. Enamul Hoque Prince is an Assistant Professor in the School of Information Technology at York University. Before joining York University, he was a postdoctoral fellow in the HCI group at Stanford University. His research addresses the challenges of the information overload problem using an interdisciplinary lens, combining information visualization and human-computer interaction with natural language processing. Enamul completed his Ph.D. in Computer Science from the University of British Columbia. He has conducted research on visual analytics at Tableau Software and the Qatar Computing Research Institute. His work has appeared in top journals and conferences including IEEE TVCG, ACM TIIS, ACM CHI, ACM UIST, and ACL. His research has been funded by NSERC Canada, National Research Council Canada, and York University among others.
Senior Research Data Scientist
Big Data Institute
University of Oxford, UK
Title: Machine Learning and Bioinformatics Models for the Diagnosis, Prognosis and Outcome Predictions of Comorbidities
To improve COVID-19 patients' outcomes, we need to identify patients at high risk of severity and mortality as early as possible, i.e., before they develop severe symptoms. How do we use the available patient clinical information (symptoms, pre-existing diseases and blood test results) to predict likely disease severity? We have developed machine learning, deep learning and bioinformatics models that use clinical information, medical images and bioinformatics data (transcriptomics, WGS and GWAS) to characterise COVID-19 patients and predict those most vulnerable to complications. We also modelled their chance of developing comorbid diseases and “long COVID”, and identified those susceptible to adverse effects of SARS-CoV-2 vaccination. We further identified potential therapeutic targets, pathobiological pathways linking major COVID-19 comorbidities, and existing compounds that may reduce the severity and complications of SARS-CoV-2 infections. Although medical staff already make such predictions from patient data, experience and skill levels vary, especially for new diseases, so Digital Health and Machine Learning based approaches have great potential to improve prediction accuracy and to support the development of decision support systems.
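As a conceptual sketch of clinical risk prediction, the following toy logistic model combines a few binary clinical features into a severity score. The feature names and weights are invented placeholders for illustration only, with no relation to the actual models developed in this work.

```python
import math

def severity_risk(features: dict) -> float:
    """Toy logistic-regression-style risk score: a weighted sum of binary
    clinical features passed through a sigmoid, yielding a score in (0, 1).
    Weights and feature names are made-up placeholders."""
    weights = {"age_over_65": 1.2, "diabetes": 0.8,
               "elevated_crp": 1.0, "low_lymphocytes": 0.9}
    bias = -2.0  # baseline log-odds for a patient with no risk factors
    z = bias + sum(weights[k] * features.get(k, 0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

low = severity_risk({"age_over_65": 0, "diabetes": 0})
high = severity_risk({"age_over_65": 1, "diabetes": 1,
                      "elevated_crp": 1, "low_lymphocytes": 1})
print(round(low, 3), round(high, 3))
```

In practice the weights would be learned from labelled patient records (and deep models would replace the linear score entirely), but the interpretation, a probability-like severity estimate from tabulated clinical features, carries over.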
Dr. Mohammad Ali Moni is a Senior Lecturer (Research) in Artificial Intelligence and Digital Health at the University of Queensland, Australia. Before joining Queensland he worked at several of the world's top universities, including the University of Oxford, the University of Cambridge, UNSW Sydney, and the University of Sydney. He received his PhD in Machine Learning, Data Science and Health Informatics from the University of Cambridge, UK. His research interests encompass artificial intelligence, machine learning, digital health and health informatics, health data science, and clinical bioinformatics. He has been awarded several fellowships and awards, including the University of Sydney Vice-Chancellor's Fellowship, research/best paper awards, and scholarships including a Commonwealth Cambridge Scholarship. He has published over 150 journal articles in top-tier journals including The Lancet.
Research Publishing – Books,
Interdisciplinary Applied Sciences,
Title: Elements of Book Publishing
The importance of research publishing can be captured by a simple quote from Gerard Piel: “Without publication, science is dead.” The story of book publishing started more than 1100 years ago, and it was further revolutionized by the invention of the first printing press by Johannes Gutenberg in 1454. In the last 20 years, science and the reporting of science have undergone revolutionary changes: computerization and the Internet have changed the traditional ways of reading and writing. Hence, it is very important for scientists and students of the sciences in all disciplines to understand the complete process of writing books and their types. The famous American author Stephen Edwin King expressed the beauty of writing well with his quote “To write is human, to edit is divine.” The talk is designed to provide information on the different elements of book publishing.
Mr. Aninda Bose is presently working as an Executive Editor with Springer Nature. He is part of the Global Acquisition Team at Springer Nature, responsible for the acquisition of scientific content across the globe, particularly in the Interdisciplinary Applied Sciences. He has more than 26 years of industrial experience in marketing and different fields of publishing. Mr. Bose holds a Masters in Organic Chemistry from Delhi University and a Masters in Marketing Research from the Symbiosis Institute of Management Studies, Pune. He has delivered more than 160 invited talks on scientific writing and publishing ethics at reputed universities, international conferences and author workshops. He has published secondary-level books in Chemistry and is a member of the American Chemical Society, USA.
Senior Project Manager
Pusan National University
Title: Big Data and the Impact of Digital Transformations in Harsh Environments
Big Data refers to data sets whose size is growing at such a speed that it becomes difficult to handle them using the traditional software tools available. The big data revolution, which encompasses techniques to capture, process, analyse and visualise large datasets in a rapid timeframe, has led to significant advances in data analysis in various environments. Here we discuss the trends emerging from these environmental analyses and propose a way forward to harness these technologies to mitigate declines in harsh environments.
This talk reviews big data and its background in harsh environments. It explains big data and reviews the five phases, or components, of the big data value chain: the quantity of data (Volume), the rate of data generation and transmission (Velocity), the types of structured, semi-structured and unstructured data (Variety), the important results extracted from the filtered data (Value), and trust and integrity (Veracity). It then focuses on classification based on five categories: data stores, content format, data sources, data processing, and data staging. Finally, it discusses the application of big data together with technologies such as Hadoop, Cloud Computing, and the Internet of Things (IoT). These considerations aim to provide a complete overview and big picture of the impact of digital transformation in harsh environments.
Dr. Kanika Singh is a Senior Project Manager at the American Bureau of Shipping and has handled several marine and offshore projects. Dr. Singh is the Electrical and Instrumentation working group leader for the Unified Bulk Offshore Standardization project (UBJIP). The significance and benefit criteria for standardization focus on cost, weight, construction efficiency, compatibility, safety requirements and operational maintenance.
Dr. Singh obtained her Ph.D. in Engineering from Pusan National University, South Korea, and her Masters in Engineering in Electrical & Instrumentation from the Indian Institute of Technology (IIT), Delhi, India and the University of Karlsruhe, Germany (under the DAAD Fellowship program). She has over ten years of experience and has travelled to the US, Canada, Europe and Asia for work. Dr. Singh is the recipient of the Brain Korea (BK21) Award (2008), the Korean Research Fellowship (KRF) award (2005 to 2008), the IEEE Outstanding Young Engineer Award (2005-2006), the DAAD (Deutscher Akademischer Austauschdienst) German government fellowship award (2001 to 2002), outstanding research paper awards at the 7th Cross Straits Symposium on Material, Energy and Environmental Sciences at Kyushu University, Japan (2005) and POSTECH, Korea (2006), the UNESCO CCAP contribution award (2006), a Visiting International Scholarship at KU Leuven, Belgium (1999), and a Best Paper Award from IEEE Region 10 at an IEEE Int. Conf. in Atlanta, USA (1998). She is a Senior Member of IEEE, USA (the highest IEEE membership grade for which members can apply, recognizing distinguished contributions) and of IEEE Women in Engineering, where she served as Vice Chair. She was also a recipient of student sponsorship to attend the Second IEEE-EMBS International Summer School and Symposium on Medical Devices and Biosensors (ISSS-MDBS), with 45 attendees, at the Chinese University of Hong Kong, 26 June to 2 July 2004.
Dr. Singh has more than 42 research publications and has given over six invited talks. She has taught as an Assistant Professor and supervised four scholars who have been awarded Masters degrees. She was one of the candidates selected from 55 countries for the International Scientific and Instrument Technology Center (ITRC) workshop, Taiwan (2013). She was an invited speaker for the IOGP/IEC Electrical subcommittee of the International Oil and Gas Producers (comprising major oil & gas industry experts), London (2017). She has been nominated as the Electrical expert by the National Committee of Korea (KATS) for IEC/TC18 and will review the revision of IEC 61892, Electrical Installations for ships and mobile and fixed offshore units. She was awarded a best paper award by IEEE, IEC and KATS at the IEC General Meeting 2018 at Bexco, Busan, and was a presenter at OTC (Offshore Technology Conference) 2019 in Houston, 6-9 May 2019.
European Molecular Biology Laboratory (EMBL)
Title: A Quantitative Approach to Study Human Cell Division
Cell division is a fundamental process of life in which a mother cell divides into two and passes its genetic information to its daughter cells. Faithful cell division is crucial in the development cycle, where a fertilized mammalian egg is transformed into a sexually mature adult. During embryonic development, equal division of a cell (the number of chromosomes should be equal in the two daughter cells) is important to maintain pregnancy towards a successful birth, and an unequal division may explain an embryo's failure to maintain the pregnancy. In later stages, cell division is guided by a tight spatiotemporal coordination of hundreds of genes called essential mitotic genes.
To understand embryonic cell division, as well as to identify potential causes of the very high rate of pregnancy loss, we employed microscopy techniques to observe the start of mammalian life over long durations. This process produces a huge volume of image data, so we developed sophisticated computational pipelines to quantify and track mitotic progression and to understand the process and potential sources of mitotic errors. This study resulted in a fundamental breakthrough showing that mammalian life begins differently than we thought. To understand the role of essential mitotic genes, high-resolution live cell imaging (3D+time) of individual protein-encoding genes was used to capture their sub-cellular localizations, generating a very large image dataset. We established a sophisticated computational framework to segment and track cells and nuclei and to standardize them to integrate protein distributions in space and time. This work created the first dynamic atlas of human cell division (https://www.mitocheck.org/mitotic_cell_atlas/), which revealed many biological insights for understanding cell division in a comprehensive manner. This talk will mainly focus on the computational aspects of the studies and demonstrate the power of bioimage informatics approaches in answering fundamental biological questions.
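The segmentation step at the heart of such pipelines can be sketched in miniature: the code below thresholds a toy intensity image and labels connected bright regions, a highly simplified analogue of nucleus segmentation (the real pipelines are far more sophisticated and operate on 3D+time data).

```python
from collections import deque

def segment(image, threshold):
    """Threshold a 2D intensity image and label connected foreground
    regions via 4-connected flood fill. Returns the region count and a
    label map (0 = background)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if image[i][j] >= threshold and labels[i][j] == 0:
                current += 1  # found a new, unlabelled bright region
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

# Two separate bright blobs on a dark background.
img = [[0, 9, 0, 0],
       [0, 9, 0, 8],
       [0, 0, 0, 8]]
n, lab = segment(img, 5)
print(n)  # 2
```

Tracking then links each labelled region to its counterpart in the next time point (for example by overlap or nearest centroid), which is what turns per-frame segmentations into mitotic trajectories.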
Dr. M. Julius Hossain is currently working as a Research Scientist in the Cell Biology and Biophysics Unit at the European Molecular Biology Laboratory (EMBL), Germany. Dr. Hossain was trained as a computational scientist and obtained his PhD degree in computer vision and image processing from Kyung Hee University, South Korea. He conducted his postdoctoral research in biomedical image analysis at Dublin City University, Ireland, and then moved to EMBL, where he has been very active in interdisciplinary research for many years. Dr. Hossain actively collaborates with scientists in several other disciplines, including biology, physics, chemistry, and bioinformatics, to develop computational methods for quantifying and modeling biological systems, especially the dividing human cell and the mammalian embryo. His research outputs have been published in a number of the most prestigious journals, including Nature, Science, Nature Cell Biology, Nature Structural and Molecular Biology, Nature Communications, Nature Protocols and eLife. Dr. Hossain spent more than a year in industry as a Software Engineer before starting his academic career at the Department of Computer Science and Engineering, University of Dhaka, Bangladesh, where he served as a full-time Lecturer and then Assistant Professor. His current research focus includes bioimage informatics, modeling cellular and embryonic shapes, and parameterization of dynamic protein distributions in the dividing human cell.
Chair in Cyber Security
Professor, Department of Computing and Mathematics
Manchester Metropolitan University, United Kingdom
Title: Blockchain as a Trustless Security Architecture for Intelligent Critical National Infrastructure
One of the key enabling technologies of smart cities is the Internet of Things (IoT). In recent years, IoT has spread into many areas of application, including critical national infrastructure (CNI) such as transport, hospitals and the power distribution grid. CNI systems depend heavily on IoT devices to perform autonomous actions or inform human decision makers. The proliferation of IoT applications has raised serious security and privacy concerns. Recently, blockchain has been advocated as a solution for secure data storage and sharing. In this talk, I will start by giving a sneak preview of blockchain technology. Then, I will outline how to apply blockchain as a foundation for trustless security for connected CNI. The discussion will investigate technologies that can be utilised to achieve a trustless matrix, such as blockchain and peer-distributed security systems, for instance onion routing, with the wider aim of defining trustless security further. The talk also considers the feasibility of trustless IoT security systems and their application to CNI.
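For the "sneak preview" of blockchain, a minimal hash chain illustrates the core trustless property: each block's hash commits to its data and to the previous block's hash, so tampering with any block invalidates verification of everything after it. This is a didactic sketch, not a production design (no consensus, signatures, or networking).

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Build a block whose hash covers both its data and the hash of the
    previous block, chaining the two together."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and check each block points at its predecessor;
    any tampering with data or links makes this return False."""
    for i, block in enumerate(chain):
        body = json.dumps({"data": block["data"], "prev": block["prev"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("sensor reading 42", "0" * 64)
chain = [genesis, make_block("sensor reading 43", genesis["hash"])]
print(verify(chain))          # True
chain[0]["data"] = "tampered"
print(verify(chain))          # False
```

The "trustless" aspect comes from the fact that any participant can run `verify` independently; no party needs to be trusted to vouch for the history, which is what makes the model attractive for CNI data whose integrity must be auditable.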
Mohammad Hammoudeh is a Professor (Chair) of Cyber Security in the Department of Computing and Mathematics at Manchester Metropolitan University. He is the founder and Editor-in-Chief of ACM's journal on Distributed Ledger Technologies: Research & Practice. Mohammad heads the CfACS Internet of Things Lab, which he founded in 2016 and where he leads a multi-disciplinary group of research associates and Ph.D. students; from this, he established the Lab as a leading research hub with a broad portfolio of successful, industry-sponsored projects. Mohammad has been awarded over £2.5M in competitive research funding as Principal/Co-Investigator on 16 research projects. He has a global collaborative research network spanning the academic community, industry, policymakers and wider technology stakeholders in the fields of cyber security, the Internet of Things and complex, highly decentralized systems. He currently investigates ways of improving industry practice to allow for guaranteed security in distributed computing applications that work effectively every time. Throughout his 15-year research career, Mohammad has developed significant insight and expertise in a number of computer science disciplines (such as blockchain, cryptography and Artificial Intelligence) adjacent to his area of specialism (distributed systems).
Department of Computer Science and Engineering
Faculty of Engineering and Technology,
University of Dhaka
Former President, Bangladesh Computer Society (BCS)
Title: Quantum Computing: The Future of Big Data
In general, the great comparative advantage of quantum computers is in finding optimal solutions to problems with an almost infinite number of variables, such as a huge number of moving atoms. A very promising area is called “quantum annealing,” whereby one might calculate how every single atom in an airflow moves over a hypothetical new type of aircraft wing. Based on similar principles, it is possible to optimize traffic in a metropolis or data streams in an electronic network.
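The annealing idea can be illustrated classically. Quantum annealers accept optimization problems in QUBO form (minimize x^T Q x over binary variables x); the sketch below solves a hypothetical two-variable QUBO with classical simulated annealing, which mimics the gradual "cooling" of a quantum annealer but involves no quantum hardware. The matrix Q and all parameters are made up for illustration.

```python
import math
import random

# Hypothetical toy QUBO: minimize x^T Q x over binary x.
# Optimal states here are (1,0) and (0,1), each with energy -1.
Q = [[-1.0, 2.0], [0.0, -1.0]]

def energy(x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(n, steps=2000, t0=2.0, seed=0):
    """Classical simulated annealing over binary vectors of length n."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    best, best_e = x[:], e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-6   # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                          # propose flipping one bit
        e_new = energy(x)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                      # reject: undo the flip
    return best, best_e

best, best_e = anneal(2)
print(best_e)   # -1.0, the optimum of this toy problem
```

A quantum annealer explores the same energy landscape via quantum tunnelling rather than thermal hops, which is where the hoped-for speedup on large problems comes from.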
The quantum race is already underway. Governments and private investors all around the world are pouring billions of dollars into quantum research and development. Satellite-based quantum key distribution for encryption has been demonstrated, laying the groundwork for a potential quantum security-based global communication network. IBM, Google, Microsoft, Amazon, and other companies are investing heavily in developing large-scale quantum computing hardware and software.
Quantum computers will disrupt current techniques and solve previously unapproachable problems, creating valuable solutions for industry. For example, pharmaceutical companies could accelerate the discovery of new drugs, materials companies could discover new molecular structures, finance companies could develop new trading strategies, transportation companies could optimize logistics, and companies relying on the output of machine and deep learning could perform analyses that are impossible with classical computing of today.
Consider a few more industries that could benefit: airlines seeking the optimal way to store spare parts at airports, distribution centers wanting the best way to maneuver robots around a warehouse, and oil and gas companies calculating how atoms and molecules can be configured to protect equipment from corrosion. In the future, quantum cryptography may also become common, due to its potential for truly secure encrypted storage and communications; it is impossible to precisely copy quantum data without violating the laws of physics. Such encryption will be even more important once quantum computers are commonplace, because their unique capabilities will also allow them to swiftly crack traditional methods of encryption, as mentioned earlier, rendering many currently robust methods insecure and obsolete.
Recently, Google proclaimed that it had achieved quantum supremacy with its “Sycamore” quantum computer, which performed a computation infeasible for any other computer today. This milestone raises fundamental questions about how quantum computing can be used and how it will affect initiatives in the digital era. Every day, humans create more than 2.5 exabytes of data, and that number continues to grow, especially with the rise of the internet of things (IoT) and 5G capabilities. Machine learning (ML) and artificial intelligence (AI) are some of the ways to help manage and analyze data for competitive advantage, but continued innovation and the desire for meaningful insights may make data increasingly complex for organizations to collect and analyze. As classical binary computing reaches its performance limits, quantum computing is becoming one of the fastest-growing digital trends and is predicted to be the solution for the future’s big data challenges. Though quantum computing is still just on the horizon, the U.S. plans to invest more than $1.2 billion in quantum information over the next 10 years in a race to build the world’s best quantum technology.
The future of quantum computing will bring enormous changes and challenges to our world. From how we secure our most critical data to unlocking the secrets of our genetic code, it’s technology that holds the keys to applications, fields and industries we’ve yet to even imagine.
Dr. Hafiz Md. Hasan Babu is currently a Professor in the Department of Computer Science & Engineering, University of Dhaka, Bangladesh, and a former Chairman of the same department. Professor Dr. Hasan Babu was also a Professor and the founding Chairman of the Department of Robotics and Mechatronics Engineering, University of Dhaka. In addition, he served as the Pro-Vice Chancellor of National University, Gazipur, Bangladesh. Dr. Hasan Babu obtained his Ph.D. in Electronics and Computer Science in Japan under a Japanese Government Scholarship and received his M.Sc. in Computer Science and Engineering in the Czech Republic under a Czech Government Scholarship.
Professor Dr. Hasan Babu is currently an Associate Editor of the UK research journal IET Computers and Digital Techniques. He was awarded the Bangladesh Academy of Sciences Dr. M.O. Ghani Memorial Gold Medal in 2017 for his outstanding research contributions to the progress of the physical sciences in Bangladesh.
Professor Dr. Hafiz Md. Hasan Babu has published more than a hundred research papers, three of which received best paper awards. He was also a member of the Prime Minister’s ICT Task Force. At present, he is the President of the International Internet Society, Bangladesh Chapter. He was the President of the Bangladesh Computer Society for the 2018-2020 tenure.
He was awarded the UGC Award 2017 in the Mathematics, Statistics and Computer Science category for his research work on a quantum multiplier-accumulator device.
Life Fellow, IEEE
Associate Editor, IEEE-TIM and Regional Editor of Int Journal of Biomedical Engineering and Technology.
National Physical Laboratory, New Delhi India
Title: Current Nano-Sensors and IoT Systems for Ubiquitous Health Care
Newer sensors are being developed day by day with the progress of science, for various industrial and biomedical applications. However, more sophisticated sensors and systems are still required for better health care, providing reliable, quick diagnosis of a particular disease or abnormality in an intelligent manner, as well as timely therapeutic treatment of diseases such as cancer. Recent developments in nano-sensors and nano-systems are discussed here for measurements in healthcare applications, particularly for elderly patients living in isolated areas. The fabrication aspects of new nano-sensors and smart systems based on different sensing mechanisms are given. Nano-chip based sensing systems, such as ultrasound on a chip, are described in detail. Main emphasis is placed on the development of IoT-based and cloud-based nano-sensors and smart systems for new clinical measurements in a ubiquitous manner. Cancer nanotechnology and the therapeutic treatment of deep-seated brain tumors with high-intensity focused ultrasound are described as case studies. Planning of a u-health care program with wireless sensor networking (WSN) in different environments is presented for more effective health care. The present study would open a new area of research in the biomedical field.
Prof. (Dr.) V.R. Singh is a Life Fellow of IEEE, Associate Editor of IEEE-TIM and Regional Editor of the International Journal of Biomedical Engineering and Technology. He holds a Ph.D. (Electrical Engg) from IIT-Delhi, is a Life Fellow of IEEE, IETE, IEI, ASI/USI and IFUMB/WFUMB, and has over 38 years of research-cum-teaching experience in India and abroad (Univ of Toronto, Canada; KU Leuven, Belgium; Korea Univ, South Korea; TU Delft, Netherlands; Univ of Surrey/Southampton, UK; PTB, Germany; and others). He has been at the National Physical Laboratory (NPL), New Delhi, as a Director-grade Scientist / Head of Instrumentation, Sensors & Biomedical Measurements and Standards, as well as Distinguished (V) Professor (AICTE/INAE) jointly with Thapar University. He has over 350 papers, 250 talks, 260 conference papers, 4 books, 14 patents and 30 consultancies to his credit. Under his guidance, 35 PhD scholars have earned their PhD degrees, while others are working with him. He is the Mentor/Advisor of PDM University. Dr. Singh was an Associate Editor of the IEEE Sensors Journal (2010-2016), and is an Associate Editor of IEEE Transactions on Instrumentation and Measurement, an Editorial Board Member of Biomedical Engineering Letters (BMEL) and Regional Editor of the International Journal of Biomedical Engineering and Technology (IJBET). Apart from this, he is on the editorial/reviewer boards of other journals, such as Sensors & Actuators (Switzerland), IEEE Trans on Engg in Med and Biology, J Computers in Electrical Engg (USA), J Instn Electr Telecom Engrs, J Instn Engrs - India, Ind J Pure & Appl Physics, J of Instrm Soc Ind, J Pure & Appl Ultrasonics, J Life Science Engg, etc.
He is the recipient of awards from INSA (Indian National Science Academy) 1974, NPL 1973, Thapar Trust 1983, ICMR (Indian Council of Medical Research) 1984, the Japan Society of Ultrasonics in Medicine 1985, the Asian Federation of Societies of Ultrasound in Medicine & Biology 1987, IE-I (Institution of Engineers - India) 1988/1991, IEEE EMBS 1999 and IEEE 2010/2011/2014, and the Sir CV Raman Award of the Acoustical Society of India, 2018, for his outstanding contributions. Presently, he is an IEEE-EMBS Distinguished Lecturer (DL), IEEE Nanotechnology Council DL and INSA-YSA DL. He has served as Guest Editor of Special Issues of JASI on Physical Acoustics and Ultrasonics (2016-17) and Medical Acoustics (2017-18), as well as of the IETE Technical Review journal on Transducers (2002). He is the Chair of the IEEE-EMBS/IMS Delhi Chapter, Immediate Past President of the Acoustical Society of India and current Vice President of the Ultrasonic Society of India, and has been Vice President of the Instrumentation Society of India, Vice President of IFSUMB, Secretary of the IEEE India Council and Chairman of the IEEE Delhi Section. Dr. Singh is a Member of the IEEE Standards Association. He was also a Council Member of WFUMB (Australia) on Ultrasound Safety and Standards. He has served as the Chair or a Member of the BIS Electro-Medical Committee in the past, and presently he is the Chairman of the BIS-MHD-15 Committee. He has been a session chair, plenary/keynote/invited speaker and advisory board member at national and international conferences and world congresses. He was the Conference Organiser of WESPAC-2018, Nov 10-15, New Delhi.
His main areas of interest are: sensors and transducers, biomedical instrumentation, biomedical standards, computer modeling and simulation, biomedical ultrasonics/medical acoustics, POCT devices, neuro-sensors/implants, nano-cancer-technology, cancer hyperthermia, tissue characterisation, lithotripsy, WSN and u-health care engineering.
Department of Computer Science
Auckland University of Technology, Auckland, New Zealand
Title: Deep Learning in Spiking Neural Networks for Spatio-Temporal Data: Methods, Systems, Applications
The talk first presents some background information about neural networks before explaining the principles of deep neural networks. I will then talk about the third generation of artificial neural networks, spiking neural networks (SNN), also called neuromorphic systems. They are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of knowledge evolution over time from the incoming data, thus allowing the development of new types of explainable AI systems. Similarly to how the brain learns, these SNN models need not be restricted in the number of layers, neurons per layer, etc., as they adopt the self-organising learning principles of the brain.
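For readers new to SNNs, their basic computational unit can be sketched as a leaky integrate-and-fire (LIF) neuron: it accumulates input with a leak and emits a binary spike whenever a threshold is crossed, so information is carried in the timing of spikes rather than in continuous activations. This is a generic textbook sketch with illustrative parameters, not the NeuCube implementation.

```python
def lif(input_current, threshold=1.0, leak=0.9):
    """Return the binary spike train produced by a sequence of input currents."""
    v = 0.0            # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # fire when the potential crosses threshold
            spikes.append(1)
            v = 0.0               # reset after a spike
        else:
            spikes.append(0)
    return spikes

# Sub-threshold inputs accumulate until a spike; a strong input fires at once.
print(lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

Networks of such neurons, connected by plastic synapses, are what learn the spatio-temporal patterns described in the talk.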
This is illustrated on an exemplar SNN architecture, NeuCube (free software and open source, along with a cloud-based version, are available from www.kedri.aut.ac.nz/neucube and www.neucube.io). Case studies are presented of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms. These include: predictive modelling of brain and cognitive data; predicting environmental hazards and extreme events; and image processing and computer vision. Hardware realisation of neuromorphic computational platforms is presented. These are massively parallel computers of thousands to millions of artificial neurons with low power consumption and ultra-high processing speed.
It is demonstrated that brain-inspired SNN architectures, such as the NeuCube, allow for knowledge transfer between humans and machines through building brain-inspired Brain-Computer Interfaces (BI-BCI). These are used to understand human-to-human knowledge transfer through hyper-scanning and also to create brain-like neuro-rehabilitation robots.
This talk aims at establishing collaboration between the author and graduate students and staff at this University for future projects.
Reference: N.Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134.
Professor Nikola Kasabov is Fellow of IEEE (Institute for Electrical and Electronic Engineers), Fellow of RSNZ (Royal Society of New Zealand), Fellow of the College of Fellows of INNS (International Neural Network Society), Distinguished Visiting Fellow of the Royal Academy of Engineering UK, Fellow of the NZ IITP (Institute for IT Professionals). He is the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI) and Professor of Knowledge Engineering in the School of Engineering, Computing and Mathematical Sciences at AUT.
His main interests are in the areas of: computational intelligence; neuro-computing; bioinformatics; neuroinformatics; speech and image processing; data mining; knowledge representation and knowledge discovery.
He has published over 650 works, among them 250 journal papers, 10 textbooks, edited research books and monographs, conference papers, book chapters, edited conference proceedings, and 28 patents and authorship certificates in the areas of intelligent systems, connectionist and hybrid connectionist systems, fuzzy systems, expert systems, speech recognition, bioinformatics, neurocomputing and neural networks. Recently he invented the first neuromorphic spatio-temporal data machine, called NeuCube, currently used in labs in 25 countries.
He has held appointments at other universities, including Honorary and Visiting Professorships at Teesside University UK, Shanghai Jiao Tong University and ETH/University of Zurich, and the George Moore Professorship at Ulster University. He was awarded Doctor Honoris Causa of Obuda University, Budapest.
Professor Kasabov has served as President of the International Neural Network Society (INNS) (2009-2010), the Asia-Pacific Neural Network Assembly (APNNA) (2008) and the Asia-Pacific Neural Network Society (APNNS) (2019).
Professor Kasabov is the General Chairman of a series of biannual international conferences on Neurocomputing and Evolving Intelligence in New Zealand. He has received numerous awards, including: an EU Marie Curie Fellowship (2011-2012); the INNS Ada Lovelace Meritorious Award (2019); the INNS Gabor Award (2012); the Bayer Science Innovator Award (2007); the Royal Society of New Zealand Silver Medal (2001); and the AUT Medal (2015).
Professor Kasabov is Associate Editor of numerous international journals, including Neural Networks. He has extensive academic experience at various academic and research organisations: University of Otago, New Zealand; University of Essex, UK; University of Trento, Italy; Technical University of Sofia, Bulgaria; TU Kaiserslautern Germany; ETH Zurich; Shanghai Jiao Tong University; Ulster University UK. Professor Kasabov has Masters degrees in Computer Science and Engineering, and a PhD in Mathematical Sciences from Technical University, Sofia, Bulgaria. He has supervised to completion 50 PhD students.
CRSS: Center for Robust Speech Systems
Erik Jonsson School of Engineering & Computer Science
The University of Texas at Dallas,
United States of America.
John H.L. Hansen received his Ph.D. and M.S. degrees from the Georgia Institute of Technology, and his B.S.E.E. degree from Rutgers University. He joined the University of Texas at Dallas (UTDallas) in 2005, where he is Associate Dean for Research, Professor of Electrical & Computer Engineering, Distinguished University Chair in Telecommunications Engineering, and holds a joint appointment in the School of Behavioral & Brain Sciences (Speech & Hearing). At UTDallas, he established the Center for Robust Speech Systems (CRSS). He is an ISCA Fellow, an IEEE Fellow, past Member and TC-Chair of the IEEE Signal Processing Society Speech & Language Processing Technical Committee (SLTC), and Technical Advisor to the U.S. Delegate for NATO (IST/TG-01). He currently serves as ISCA President. He has supervised 92 PhD/MS thesis candidates, was the recipient of the 2020 UT-Dallas Provost’s Award for Graduate Research Mentoring and the 2005 University of Colorado Teacher Recognition Award, and is author/co-author of over 750 journal and conference papers in the field of speech/language/hearing processing and technology, with over 20,000 citations. He served as General Chair for Interspeech-2002, Co-Organizer and Technical Chair for IEEE ICASSP-2010, and Co-General Chair and Organizer for the IEEE Workshop on Spoken Language Technology (SLT-2014) (Lake Tahoe, NV). He is serving as Co-Chair for ISCA INTERSPEECH-2022 and Technical Chair for IEEE ICASSP-2024.
Head of Research
Intelligent Voice Ltd
Title: Advances in speech and language technology: an industry 4.0 perspective
Speech and natural language technology have advanced at a rapid pace in recent years. This advance, a facet of the industry 4.0 era, has been driven in part by GPGPU hardware and the deep learning frameworks that use them, and by the adoption of open-source software by the academic and commercial AI community alike. The spirit of cooperation among researchers in the academic and commercial worlds has resulted in claims of human parity in speech recognition models, and the emergence of numerous architectures based on decision trees, DNNs, CNNs, RNNs and Transformers, to mention but a few. These developments have markedly impacted the way in which humans communicate with computers, and are currently driving numerous commercial products that rely on speech, natural language processing and natural language understanding, loosely termed Conversational AI. This talk will present two real world case studies in the medical and insurance domains that exploit speech and language to augment the ability of human operators to do their jobs more efficiently. These use cases, taken from the presenter’s experience working in the speech and natural language processing commercial world, represent an informative snapshot of the possibilities that speech and natural language processing advances are bringing to Industry 4.0 applications.
Cornelius Glackin graduated from the Ulster University School of Computing & Intelligent Systems with an MSc in Computing & Intelligent Systems in 2004, and completed a PhD on spiking neural network research at Ulster University in 2009. After six years of post-doctoral research at the University of Ulster and the University of Hertfordshire, he moved to industry. Cornelius is an experienced data scientist with over 15 years of research and development experience. He published his first paper on neural network research in 2005 and has gone on to publish over 50 papers in the machine and deep learning fields. Cornelius is Head of Research at Intelligent Voice, where he and his team are engaged in research into acoustic and language model development, speech enhancement, diarization, natural language processing, neural machine translation, GPU parallelisation, and privacy-preserving computation. In addition, Cornelius works as a consulting data scientist on behalf of the company.
Professor, Institute for Advanced Co-Creation Studies
Osaka University, Japan
Concurrent affiliation: The Institute of Scientific and Industrial Research (ISIR)
Title: Video-based Gait Analysis and Its Applications
Gait is considered one of the behavioral biometric modalities, available even at a distance from a camera and without subject cooperation. We can perceive a variety of information from gait: identity, age, gender, emotion, situation, health status, and aesthetic attributes (e.g., beautiful, graceful, and imposing). Of these, human perception-based aesthetic attributes are important, because people who pay attention to their fashion style and body shape may also pay attention to their gait, i.e., whether it looks nice or not. In this talk, I’ll begin with a brief overview of our recent progress in video-based gait analysis. I will then introduce three works on human perception-based gait aesthetic attribute estimation. All of the methods rely on relative attribute frameworks with relative annotation of paired data (e.g., the first is better, neutral, or the second is better). While training uses paired data with Siamese-type (i.e., two identical streams) deep neural networks, inference processes a single input through one of the streams, i.e., we never need to feed paired data at test time. Qualitative and quantitative results are shown on our own constructed gait databases and annotations.
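The Siamese relative-attribute training described above can be sketched with a shared scorer and a margin ranking loss: the two "streams" are the same parameters applied to each item of a pair, and at test time a single item is scored alone. Everything below (a linear scorer, synthetic features where a larger first component is "nicer") is a simplified stand-in for the deep networks and gait descriptors of the actual work.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=4)   # shared weights: both streams use the same W

def score(x):
    """Single-stream aesthetic score; at test time only this is needed."""
    return float(W @ x)

def ranking_loss_grad(xa, xb, label, margin=1.0):
    """Margin ranking loss on a pair: label=+1 means xa should score higher."""
    diff = score(xa) - score(xb)
    if label * diff >= margin:       # pair already ranked correctly with margin
        return 0.0, np.zeros_like(W)
    return margin - label * diff, -label * (xa - xb)

# Synthetic "relative annotations": the item with the larger first feature wins.
pairs = [(rng.normal(size=4), rng.normal(size=4)) for _ in range(200)]
data = [(a, b, 1.0 if a[0] > b[0] else -1.0) for a, b in pairs]

for _ in range(50):                   # simple SGD over the paired training data
    for xa, xb, y in data:
        loss, g = ranking_loss_grad(xa, xb, y)
        W -= 0.01 * g

# After training, an unseen single gait descriptor can be scored on its own.
```

The key property the talk highlights survives even in this toy: pairs are needed only for training, while inference is one forward pass on one input.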
Yasushi Makihara received the B.S., M.S., and Ph.D. degrees in Engineering from Osaka University in 2001, 2002, and 2005, respectively. He was appointed as a specially appointed assistant professor (full-time), an assistant professor, and an associate professor at The Institute of Scientific and Industrial Research, Osaka University, in 2005, 2006, and 2014, respectively. He is currently a professor of the Institute for Advanced Co-Creation Studies, Osaka University. His research interests are computer vision, pattern recognition, and image processing including gait recognition, pedestrian detection, morphing, and temporal super resolution. He is a member of IPSJ, IEICE, RSJ, and JSME. He has obtained several honors and awards, including the 2nd Int. Workshop on Biometrics and Forensics (IWBF 2014), IAPR Best Paper Award, the 9th IAPR Int. Conf. on Biometrics (ICB 2016), Honorable Mention Paper Award, the 28th British Machine Vision Conf. (BMVC 2017), Outstanding Reviewers, the 11th IEEE Int. Conf. on Automatic Face and Gesture Recognition (FG 2015), Outstanding Reviewers, the 30th IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2017), Outstanding Reviewers, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology, Prizes for Science and Technology, Research Category in 2014. He has served as an associate editor in chief of IEICE Trans. on Information and Systems, an associate editor of IPSJ Transactions on Computer Vision and Applications (CVA), a program co-chair of the 4th Asian Conf. on Pattern Recognition (ACPR 2017), area chairs of ICCV 2019, CVPR 2020, ECCV 2020, and reviewers of journals such as T-PAMI, T-IP, T-CSVT, T-IFS, IJCV, Pattern Recognition, and international conferences such as CVPR, ICCV, ECCV, ACCV, ICPR, FG, etc.
Associate Professor in Computer Engineering
Massey University, New Zealand
Title: Indoor Positioning System: GPS for Smart Homes and Smart Buildings Leveraging Machine Learning and Internet of Things
Location-based services and ambient assisted living are key facilities of smart cities and smart homes. Implementation of such services requires a functional positioning system. Outdoor positioning systems like GPS do not work reliably inside buildings, and researchers have worked hard for the past two decades to develop a robust, affordable Indoor Positioning System (IPS). Traditional research on IPS has been fractured, with a siloed approach concentrating on a single sensing modality; a robust, functioning IPS can only be realised if it is treated as a multidisciplinary, multisensory problem. The rapid adoption of the Internet of Things (IoT) provides an opportunity to implement IPS by repurposing the pre-existing infrastructure of networked devices and ambient sensors in modern buildings. Machine Learning (ML) techniques present an opportunity for a data-driven approach; however, ML requires a large training corpus, incurring substantial cost in human time and labour. This talk introduces the audience to Indoor Positioning Systems and covers these topics.
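A common data-driven IPS baseline of the kind alluded to above is Wi-Fi fingerprinting: an offline survey maps locations to received signal strength (RSSI) vectors from nearby access points, and a live reading is matched to its nearest stored fingerprint. The rooms and RSSI values below are entirely hypothetical; a deployed system would use many more reference points and a learned model rather than one-nearest-neighbour.

```python
import math

# Hypothetical offline fingerprint database: location -> RSSI readings (dBm)
# from three Wi-Fi access points, collected during a site survey.
fingerprints = {
    "kitchen": [-40, -70, -80],
    "lounge":  [-65, -45, -75],
    "bedroom": [-80, -72, -42],
}

def locate(rssi):
    """Nearest-neighbour match of a live RSSI vector against the database."""
    return min(fingerprints,
               key=lambda room: math.dist(fingerprints[room], rssi))

print(locate([-42, -68, -79]))   # kitchen
```

This illustrates both points from the abstract: the infrastructure (access points) is repurposed rather than installed for positioning, and the accuracy of the data-driven approach depends directly on the labelled survey corpus, which is costly to collect.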
Associate Professor Fakhrul Alam is the Department Leader of Mechanical & Electrical Engineering, Massey University, New Zealand. He also holds the position of Adjunct Professor with the School of Engineering and Technology of Sunway University, Malaysia, for 2021-22. He received a BSc (Hons) in Electrical & Electronic Engineering from BUET, Bangladesh, and MS and Ph.D. degrees in Electrical Engineering from Virginia Tech, USA. His work involves the development of intelligent systems, smart sensors and precision instrumentation, leveraging his expertise in wireless and visible light communication, IoT and signal processing. His work has been sponsored by, among others, the New Zealand Ministry of Business, Innovation and Employment (MBIE), Auckland Transport and New Zealand Forest Research Institute Limited. A/Professor Alam is a Senior Member of the Institute of Electrical & Electronics Engineers (IEEE) and a member of the Institution of Engineering & Technology (IET) and the Association for Computing Machinery (ACM). He is an Associate Editor of IEEE Access, a Topic Editor of Sensors (MDPI) and sits on the IEEE Conference Technical Program Integrity Committee (TPIC). He is also the only engineering academic at Massey University to have been elected “Lecturer of the Year”.