Contactless Vital Signs Estimation & AI Based Signal Processing
Develop and optimize signal processing algorithms for contactless human vital signs sensing, anomaly detection, and human tracking
Programming Languages & Compilers
Research, design, and optimization of algorithms for static analysis. Automatically translate dynamically typed languages into statically typed programming languages. We welcome experienced developers, as well as students who are winners and prize-winners of olympiads in informatics, mathematics, and physics (not lower than the All-Russian School Olympiad). Remark about the internship opportunity: we consider only candidates who are ready for an internship of at least 6 months, 32 hours a week.
We solve mathematical and algorithmic problems that arise while developing and operating wired networks, supercomputers, and cloud computing. Among these tasks are: graph clustering with subsequent routing; using AI for routing; routing that meets several independent constraints (for quality of service); flow and server load balancing; traffic prediction. These tasks draw on a wealth of beautiful mathematics and algorithms from graph theory, discrete optimization, and probability theory. Projects start with a literature review; we then build an adequate mathematical model and look for ways to solve the problem within the model. Finally, we turn our solution into code and check its quality on test cases. We welcome PhD students and PhDs, as well as students who are winners and prize-winners of olympiads in mathematics, informatics, and physics (not lower than the All-Russian School Olympiad). Remark about the internship opportunity: we consider only candidates who are ready for an internship of at least 9 months, 32 hours a week.
Big Data/Clickhouse Architecture
Act as the owner of the business sub-domain of the team, and be responsible for roadmap planning and requirement clarification. Independently design and develop big data distributed computing engine features. Explore and identify effective means to optimize distributed computing, storage, and communication.
Big Data Graph Algorithms Research & Development
The Huawei Moscow Computing team is looking for graph algorithms developers to drive the research, design, analysis and optimization of an in-house large-scale graph-processing framework. We have positions of different levels — junior developers, individual contributors, technical leaders, and architects. Senior-level candidates will have a chance to influence the product development and roadmap.
PROJECT DESCRIPTION The project is to develop a general-purpose library of parallel and distributed graph analysis algorithms to support billion- and trillion-scale graph processing on modern computing platforms. We use several approaches for large-scale graph processing, including representation of graph algorithms in terms of matrix/vector operations (GraphBLAS), Apache Spark-based distributed processing, incremental batch processing, and dynamic (time-evolving) graph processing. Currently, our libraries support a wide range of classical graph analysis algorithms (e.g., connected components, shortest paths, betweenness centrality, PageRank, subgraph matching, Node2Vec), and we plan to extend them with more types of algorithms. Our goal is to compete with open-source and industry-leading solutions in terms of performance and data scale. In addition, we conduct long-term research on distributed-memory graph algorithms and aim to develop a novel large-scale graph-processing framework.
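To illustrate the GraphBLAS-style approach mentioned above, here is a minimal sketch of one BFS level per matrix-vector product over a boolean-like semiring; the 4-node graph is an illustrative toy, not taken from any Huawei library:

```python
# A minimal sketch: BFS levels via matrix-vector products (GraphBLAS style).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=np.int8)  # A[i, j] = 1 iff edge i -> j

frontier = np.array([1, 0, 0, 0], dtype=np.int8)  # start BFS from vertex 0
visited = frontier.copy()
level = 0
while frontier.any():
    print("level", level, "->", np.nonzero(frontier)[0].tolist())
    # One semiring "mat-vec": vertex j joins the next frontier iff some
    # current frontier vertex has an edge to j and j is still unvisited.
    frontier = ((A.T @ frontier) > 0).astype(np.int8) & (1 - visited)
    visited |= frontier
    level += 1
```

In a real GraphBLAS implementation the same expression runs over sparse matrices and masks, which is what makes the formulation scale to billion-edge graphs.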
The team consists of algorithm developers, mathematicians, and software engineers with a mathematical background; most team members hold a Ph.D. degree.
Video Compression Algorithms
Study, design, and development of machine learning based compression algorithms for video/image content. Designing network architectures and optimizing their performance based on video compression criteria. Preparation of patent applications. Contributing to next-generation international video/image coding standards.
3D Compression Algorithms
Exploration and development of novel compression algorithms for 3D content (3D mesh, 3D point cloud, texture), including traditional and AI-based approaches. Design and development of new transmission systems for 3D content. Participating in international standardization activities.
Development of adaptive media streaming protocols for 3D volumetric video applications (video on demand and/or live broadcasting for AR/VR, digital humans, etc.). Development of working prototypes for further product-line delivery. Participating in relevant international standards meetings, such as MPEG (Moving Picture Experts Group).
Big Data ML Research & Algorithm Development
The Huawei Moscow Computing team is looking for ML algorithms developers to drive the research, design, analysis and optimization of an in-house ML algorithms framework. We have positions of different levels — junior developers, individual contributors, technical leaders, and architects. Senior-level candidates will have a chance to influence the product development and roadmap.
PROJECT DESCRIPTION The project is to develop a general-purpose library of ML algorithms for the Apache Spark distributed computing platform and compete with industry-level data analysis solutions, such as Spark ML/MLlib, Intel oneDAL, etc. Currently, our solution supports a wide range of classical ML algorithms (e.g., linear regression, SVM, kNN, random forest, dimensionality reduction, gradient boosting), and we plan to extend it with more types of algorithms. The team conducts research, development, idea verification, implementation, and performance optimization of new ML methods and algorithms. We also cooperate with academic partners when deep theoretical research is required. The team consists of algorithm developers, mathematicians, and software engineers with a mathematical background; most team members hold a Ph.D. degree.
You are welcome to join us and grow together!
Fuzzing & Software Testing
Software testing and fuzzing automation across different cloud-native applications. Development and application of fuzzing tools. Linux trustworthiness model and internals, C/C++ programming languages, build systems.
Data Compression Algorithms
Lossless data compression algorithms. Math (statistics, information theory, mathematical modeling). Typical business data is the main subject for compression: e.g., the contents of various database types, Virtual Server and Desktop Infrastructures. Compression of specific data types is also possible: genome data, satellite images, 5G log data, etc.
Application of program analysis techniques in product development. Technologies used: OOD, OOP, design patterns, TDD, C and C++, Python, Windows, Linux, FreeBSD, Mac OS, GCC toolchain, LLVM toolchain, CMake, git, Jenkins, Jira, etc.
Video Compression Algorithm
Video recompression gain with visual quality preservation for mobile platforms. Coding in C/C++, Python, Java, etc. Video & image codec standards such as H.265, H.264, JPEG, etc. Video & image compression algorithm design.
Data Management Algorithms
Design and development of a simulation framework and application of researched workload-prediction and data-management algorithms. Algorithms and data structures domain.
Data storage systems, virtualization technologies, networking and complex software integration, storage within the VMware ecosystem.
Big data processing technologies: big data collection, processing, storage, analysis, and other stages; mainstream big data analysis technologies.
NAS Data Service
NAS storage product solution design and development. Key technology planning and layout for NAS and unstructured data services in HPC, HPDA, AI, big data, and data lake scenarios, and long-term competitiveness of products. Scripting languages: Python, Perl, and/or Ruby.
Data protection (backup and archive) and copy data utilization for mainstream workload ecosystems, such as databases including NoSQL (MongoDB, KVDB, GraphDB, etc.) and distributed DB, NAS, files, virtualization (containers), big data, and other secondary storage scenarios.
Cloud gaming: reducing the latency of audio/video data transfer between the user and the cloud. GPU optimization.
Cloud gaming makes it possible to play games using the capacity of a remote server. The game image is compressed and then transmitted to the user's display over the network. On the client side, the gamer's device doesn't require any high-end processors or graphics cards. We are trying to provide high game-image quality, low latency, and installation-free, zero-waiting software. To develop this technology, we need to find the best solutions for audio/video encoding, network transmission optimization, and GPU virtualization. Another direction of our project is Metaverse research. We want to build our own metaverse with augmented and virtual reality.
Support and development of a framework for formal verification of OS code using the functional language OCaml and the proof assistant Coq.
Cloud Technologies Research
Research and development of cloud infrastructure components such as OpenStack, Kubernetes, and other related CNCF and OpenInfra projects. Developing new breakthrough technologies for 5G Cloud solutions and working on improving existing open-source and proprietary projects (with a primary focus on high performance and extra-large-scale support). Actively participating in the corresponding open-source communities.
Big Data processing
Research, development, and optimization of distributed data processing engines for the next generation of Huawei products and the Big Data platform, bringing innovative architectures and technologies in distributed data processing and algorithms for better performance, scaling, and locality.
Network Protocol Stack
Implementation of new software for the network stack, integration of new algorithms
Computer Vision with Deep Learning
We aim to carry out state-of-the-art machine learning research in computer vision, video data processing, and self-driving vehicles (SDV), and to create optimized neural algorithms for the Huawei Ascend platform.
Key Responsibilities include:
• Analyze industry/academia trends and identify the competitiveness of the Video Intelligence technology market and Huawei's market offerings
• Become a leading researcher in the R&D team, define the roadmap of research targets, bridge the gap between research achievements and industrial products
• Be responsible for research projects within the Video Intelligence portfolio, completing core technology construction in the Video Intelligence scope
• Build cooperation with European universities, research institutes, and industrial partners
• Represent Huawei within the broader industry, attending and participating at international meetings and conferences
Requirements:
• Strong research background and relevant industrial experience
• Experience in video data extraction and analytics technologies; object detection and tracking, classification, data augmentation, abnormal event detection, and multi-modality fusion are preferred
• Experience in any module of the SDV stack (perception, prediction, planning) is highly preferred
• Familiarity with deep learning architectures and open-source frameworks; domain adaptation, transfer learning, zero-, one-, or few-shot learning, network compression
• Experience with fast and efficient hardware-accelerated approximate nearest neighbor systems for data retrieval projects
• Experience with methods for accelerating neural networks on hardware, both for training (e.g., improved optimization algorithms) and inference (e.g., pruning, tensor decompositions)
• Master's or higher degree in computer vision, image processing, deep learning/machine learning, or significant relevant industry experience
• Strong mathematical education in probability theory, linear algebra, statistics, and relevant math fields
• Good knowledge of Python and mathematical libraries; knowledge of C++ is a big plus
• PhD in a relevant discipline is preferred
• 3+ years of industry experience in one or multiple areas of Computer Vision, Machine Learning/Deep Learning, and Data Science, as well as experience in applying academic studies to industrial applications, is preferred
• 3+ years of experience in video data analytics and processing is preferred
Graph Processing Software/Algorithm Research
The group conducts research and development on cutting-edge graph computing software and algorithm technologies, including but not limited to graph computing and learning frameworks, graph processing system optimization and acceleration, and graph representation learning algorithms (e.g., graph embeddings and graph neural networks).
Neuromorphic and Neuro-inspired Computing Algorithm/Fabric Research
The group conducts research and development on cutting-edge neuro-inspired computing algorithms/materials, including but not limited to brain-mimetic algorithm design, neuromorphic computing elements, spiking neural networks (their training and optimal topology), distributed intelligent agents, knowledge representation and learning algorithms, algorithm optimization and acceleration, and multi-modal learning.
Theoretical research and development of advanced information and communications systems (mathematics, physics, computer science)
The team is dedicated to the research and development of basic communications theories, supporting future product development with advanced theories, resolving pain points of concern to academia, searching for possible solutions, and exploring possible fields of technological innovation. Our research is not limited to communications products, but may focus on a wide range of academic issues, such as artificial intelligence, computer technology, quantum information theory, and semiconductor technology.
AI System Engineering (C++)
The group develops machine learning systems and tools for various mobile and edge devices. The main project is a cross-platform library for on-device neural network training, which we have been developing since 2019. There is plenty of engineering work here, along with research tasks.
Image Processing/Computer Vision Research
The group pursues Research & Development in the areas of Machine Learning, with a particular focus on Deep Learning, Computer Vision, and Image and Video restoration and optimization. As a member of the team, you will work on some of the most challenging and ambitious technical problems in computational imaging and develop new DL solutions for image and video applications.
Speech and Natural Language Processing Research
The group pursues Research & Development in the areas of Machine Learning, with a particular focus on Deep Learning, Text-to-Speech and Speech-to-Text, and Natural Language Understanding. As a member of the team, you will work on some of the most challenging and ambitious technical problems in natural language processing, searching/ranking algorithms, speech synthesis, and acoustic feature analysis.
Research of ML/AI approaches for CPU performance & power optimization
ML analysis of CPU behavior and research into dependencies across metrics, application clustering, and development of design-space exploration tools.
Development of SW for CPU performance & power optimization
Development of various SW for CPU optimization: compiler, binary translator, functional and performance CPU models, DBI tools.
Image processing algorithms
Developer of image processing algorithms. Job responsibilities:
- Development of image processing algorithms
- Preparation and processing of data: real or 3D-rendered
- Presenting and preparing documentation
- Preparation of patents and publications
- Teamwork, helping the less experienced team members
- For more experienced candidates: project leadership
Development of the compiler and execution environment for the Golang programming language. Active participation in the open-source language community (golang.org).
Augmented Reality algorithms
AR/VR algorithm developer. Job responsibilities:
- Development of image processing algorithms
- Preparation and processing of data: real or 3D-rendered
- Presenting and preparing documentation
- Preparation of patents and publications
- Teamwork, helping the less experienced team members
- For more experienced candidates: project leadership
Responsible for Huawei Cloud hardware data-driven performance optimization.
Big Data analysis
Responsible for Cloud hardware performance data mining: finding system bottlenecks and creating metrics-importance rankings based on data analysis.
Responsible for Linux kernel-level components/software development. A good understanding of the system performance counter unit is required.
"Be responsible for static analysis, UNIT testing, dynamic code analysis, intelligent diagnostic analysis, bug fixing, and other code analysis services (such as code search, code synchronization, library auto-update) for the next-generation R&D process. Participate in the R&D in design, application architecture, creation of applications for a technological breakthrough. Build utilities for working with Big Data."
Work on world-class developer tools and services to support the growth of Huawei's cloud and computing business. Research and study the world's latest AI technology to improve models. In our work, we use knowledge in code analysis, optimizations, compilers and machine learning technologies.
AI Video Codec Engineer
Research and optimization of algorithms for working with media in Huawei Cloud services; design, training, and optimization of algorithms and models for video codecs, lossless compression, and video quality assessment; integration of algorithms/models into cloud service products.
The ability to independently solve design and development problems of a distributed computing engine in the field of big data. The ability to find the most efficient ways to optimize distributed computing, data storage, and communication between computers. The ability to lead the business unit within the team and be responsible for planning and shaping requirements. Preference is given to applicants with a Candidate of Sciences (PhD) degree, graduate students, and students who plan to defend their PhD thesis this year.
Planning and development of highly efficient AI algorithms and systems, as well as joint hardware and software optimization of AI algorithms (in at least one of the areas of AI), including machine learning, deep learning, reinforcement learning, computer vision, natural language processing, optimization, recommendations/search, and much more. Deep optimization of artificial intelligence algorithms applicable in the fields of smart cities, transport, medicine, the Internet, manufacturing, astronomy, etc. Preference is given to applicants with a Candidate of Sciences (PhD) degree, graduate students, and students who plan to defend their PhD thesis this year.
Research and development in combinatorial optimization for large-scale cloud systems (VM, jobs and storage). Work in several areas: resource utilization forecasting, SLA forecasting, capacity planning. Test your ideas in a real environment and through simulations. As a researcher, you have the opportunity to work on poorly defined problems. Publish academic papers and patents (optional).
Data Compression Algorithms
Research, development and optimization of data compression algorithms and conducting cutting-edge research in the field of data compression.
Responsible for creating a high-performance distributed database. Optimization of the database system by researching new technologies in the field of database kernels. We need a candidate with the ability to design algorithms and experience in data structuring.
Development of the top-level mathematical optimization solver.
Automatic Unit Test Generation and Code Synthesis
Maximize the test coverage of automatically generated software test cases.
Reducing the false positive (FP) rate of the BloomFilter in LSM
In an LSM tree, data is periodically flushed to different SST files through compaction operations. During data reading, each SST file provides a BloomFilter to check whether a key exists in the file. It is known that when the BloomFilter returns false, it is 100% guaranteed that the current key is not in the SST file. However, when the BloomFilter returns true, the key may or may not exist in the SST file; when it does not, the answer is a false positive. The query performance of LSM-based KV storage databases depends largely on the false positive rate. The goal is to design an algorithm that can effectively reduce the false positive rate of the BloomFilter. The expectation is a reduction of more than 10%; the higher, the better.
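As a baseline for this goal, here is a minimal sketch of the textbook false-positive estimate for a Bloom filter with m bits, k hash functions, and n inserted keys; the parameters below are illustrative choices, not values from any particular LSM engine:

```python
# A minimal sketch: the textbook false-positive estimate for a Bloom filter.
from math import exp, log

def bloom_fpr(m: int, n: int, k: int) -> float:
    # P(false positive) ~ (1 - e^(-k*n/m))^k
    return (1.0 - exp(-k * n / m)) ** k

n = 1024                          # keys in one SST file (illustrative)
m = 10 * n                        # 10 bits per key (illustrative)
k_opt = round((m / n) * log(2))   # optimal k = (m/n) * ln 2
print(k_opt, bloom_fpr(m, n, k_opt))   # ~7 hashes, FPR around 0.8%
```

Any candidate algorithm can be judged against this analytical baseline at the same bits-per-key budget.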
"Billions of users have now data written with Facebook Zstandard (ZSTD) or Apple LZFSE compressors, which use tANS (tabled asymmetric numeral systems) entropy coder – building finite automaton optimized for a given probability distribution, which transforms symbol sequence into bit sequence. Details of this automaton are determined by symbol spread (combination): assignment of one symbol to each state. There is exponential number of such possibilities, getting various performances as mean numbers of used bits/symbol – there is basic practical problem of quickly finding nearly optimal symbol spread."
For database management systems, a large amount of work has been done to explore parallel processing of database operations. Special database machines have been designed to obtain increased system performance through both inter-query and intra-query parallelism. However, most existing relational database query optimizers only consider plans in which the execution order is modeled by a linear processing tree: an M-way join query is treated as a sequence of 2-way joins of the form (((R1 ⋈ R2) ⋈ R3) ⋈ …) ⋈ RM. This strategy seems adequate in uniprocessor systems. In a multiprocessor environment, however, the number of feasible join plans increases dramatically with the new dimensions introduced by parallelism, and parallel processing tends to give better overall system performance. In this topic, we expect that a multi-way join should perform significantly better (by at least 20%) than pair-wise joining of tables for a given example database.
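To make the growth of the plan space concrete, here is a small sketch counting linear (left-deep) versus bushy join trees for M relations, using standard combinatorics:

```python
# A small sketch: the number of left-deep (linear) join trees is M!, while the
# number of bushy join trees is M! * Catalan(M-1) = (2M-2)! / (M-1)!.
from math import comb, factorial

def left_deep_plans(m: int) -> int:
    return factorial(m)                      # one plan per relation ordering

def bushy_plans(m: int) -> int:
    catalan = comb(2 * (m - 1), m - 1) // m  # Catalan number C(m-1)
    return factorial(m) * catalan            # orderings x binary tree shapes

for m in (3, 5, 8):
    print(m, left_deep_plans(m), bushy_plans(m))
# 3 -> 6 vs 12; 5 -> 120 vs 1680; 8 -> 40320 vs 17297280
```

Bushy plans are exactly the shapes that expose the intra-query parallelism a linear-tree optimizer never considers.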
Fast, efficient algorithms for VM performance correlation and cluster analysis based on datacenter hardware PMU (Performance Monitoring Unit) big-data time series
In a cloud datacenter, VMs are deployed on physical machines, and each physical machine uses PMUs (performance monitoring units) to monitor workload status. Different workloads show different PMU signatures, so a fast and accurate algorithm is needed to find which PMU (or set of PMUs) impacts VM performance and to make a quantitative assessment.
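As a naive starting point, counters can be ranked by the absolute Pearson correlation of each PMU series with a VM performance metric; the data below is synthetic and the "ground truth" counters are a hypothetical assumption for illustration:

```python
# A minimal sketch: rank PMU counters by |Pearson correlation| with a VM metric.
import numpy as np

rng = np.random.default_rng(0)
T, P = 10_000, 32                  # time samples, PMU counters (illustrative)
pmu = rng.normal(size=(T, P))      # stand-in for PMU counter time series
# Hypothetical ground truth: latency driven by counters 5 and 12 plus noise.
latency = 3.0 * pmu[:, 5] - 1.5 * pmu[:, 12] + rng.normal(size=T)

# Pearson correlation of each counter with the latency series.
pmu_c = pmu - pmu.mean(axis=0)
lat_c = latency - latency.mean()
corr = pmu_c.T @ lat_c / (np.linalg.norm(pmu_c, axis=0) * np.linalg.norm(lat_c))
ranking = np.argsort(-np.abs(corr))
print("counters ranked by |correlation|:", ranking[:5])
```

The research topic asks for methods that go beyond this baseline: fast enough for datacenter-scale time series and robust to nonlinear and lagged dependencies.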
Math Library optimization
Development and optimization of a math library consisting of basic linear algebra functions, such as matrix-vector operations and linear solvers, for various storage formats on the Huawei Kunpeng processor (ARMv8), to reach the highest possible computational efficiency; algorithm optimization technology to support the performance improvement of HPC and AI applications. The programming and mathematical challenges require skilled PhD students and PhDs.
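For a sense of what "various storage formats" means in practice, here is a minimal reference sketch of sparse matrix-vector multiplication in the common CSR format; an optimized library would vectorize and parallelize this loop for the target microarchitecture:

```python
# A minimal sketch: sparse matrix-vector product y = A @ x in CSR format.
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                       # one output row at a time
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[j] * x[col_idx[j]]     # accumulate nonzeros of row i
    return y

# 3x3 example: [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [7 6 17]
```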
Compilers and Programming Languages
Maintenance and refactoring of the existing proprietary compiler code base; Development of auxiliary compiler modules; Baseline compiler debugging; Compiler test coverage improvement; Compiler performance measurement and analysis.
Wireless Power Amplifier RF Simulation
Joint optimization of power amplifiers (PA) together with digital pre-distortion (DPD) algorithms in terms of linearity and efficiency. Design and simulation of high-efficiency RF power amplifiers, including hybrid PAs such as Envelope Tracking (ET) or digital Doherty. High electron mobility transistor (HEMT) TCAD simulation and optimization within PA design.
On-device AI, Auto-ML, NN Compression
Automatic hardware-friendly compression of neural networks for On-device AI
Mathematical programming solver
The Mathematical Modeling and Optimization team at the Huawei Minsk Research Center conducts fundamental and applied research in Mathematical Programming, Scheduling, Combinatorial Optimization, Large Scale Optimization, Meta-Heuristics for fundamental optimization solver which targets linear programming, (mixed) integer programming and general constraint programming problems.
Mobile 3D graphics
Optimizing the Android 3D stack and implementing new features to improve the Android gaming experience.
IoT & Cloud AI Research
Our team conducts AI-based research for IoT & Cloud technologies. Tasks range from the detection of anomalies in IoT & Cloud interaction to the classification and clustering of behavioral features and the detection and filtering of unwanted content and activities.
Electromagnetic simulation for 5G antenna array
Development of fast and efficient algorithms for the numerical solution of partial differential and integral equations (Maxwell's equations) for complex electromagnetic simulation of 5G antenna arrays. Research directions: integral equations (IE) and the finite element method (FEM), meshing algorithms, matrix compression/approximation methods, parallel matrix solvers (direct and iterative).
Exploration of key LiDAR technologies, development of innovative LiDAR system solutions, and improvement of the future competitiveness of LiDAR products. Development and evolution of LiDAR algorithms, creating competitive LiDAR algorithm architectures and solutions. Building LiDAR algorithm models to complete system simulation, analyze theoretical performance, and propose hardware specifications.
1. Research and development of state-of-the-art deep learning models. 2. Neural network structure and inference optimization. 3. Product development for mobile phones.
Compute Architecture for Neural Networks
Research of next generation computing acceleration technologies. Design and development of AI computing framework, algorithms and kernels in lower precision data types, and performance optimization. Close collaboration with other teams, including hardware engineers.
Optimized mathematical libraries
Design and development of mathematical libraries optimized for HiSilicon hardware. Collaboration with other teams (hardware, OS, compiler, etc.) to secure best possible performance.
Graphic UI SDK
Graphics UI SDK for cross-platform application development. Target devices vary: Mobile, Automotive, TV, Wearable.
DevOps tools and methods
Tools and methods to help programmers deliver products with higher quality, including continuous integration, static analysis, runtime code analysis, automated testing, artifact management, and deployment automation.
System Software Technologies
Research, development, and optimization of tools, frameworks, and runtimes empowering the next generation of Huawei products, bringing innovative technologies to the System Software area and committing to open-source community projects.
Research of wireless / microwave / optical nonlinear algorithms
Development of innovative algorithms for next-generation Wireless (5G base stations), Microwave (5G backhaul), and Optical (long-haul 400G) telecommunication systems. Participation in the development of system-level architecture for radio and optical transmitters and receivers, estimation of feasibility, analysis of risks and difficulties. Focus on modern signal processing techniques in the digital domain and correction of imperfections in analog devices (power amplifiers, up/down converters, local oscillators, passive intermodulation, Mach-Zehnder modulators, lasers, and photodiodes). Research and simulation of digital signal processing algorithms, such as adaptive filtering, nonlinear predistorters and equalizers, I/Q correction, PAPR reduction, phase noise compensation, etc.
Next generation operating systems and development tools for Huawei consumer eco-system
Core operating system algorithms, compiler design (C / C++ / Java / Kotlin / a new language), virtual machines, JIT optimizations, advanced profiling and debugging, HW-SW co-optimization techniques, advanced verification frameworks
Designing and creating next-generation operating systems and the corresponding development and analysis tools to enable applications for multi-language, multi-device distributed environments (phones, tablets, watches, TVs, cars, panels, robots, etc.)
Next generation of 3D-graphics and video technologies for mobile devices
Real-time 3D-graphics / image / video processing algorithms: AR/VR/ray tracing, game engines, OpenGL ES and Vulkan graphics APIs and extensions, shader languages, different implementations of codecs, super-resolution and video enhancement, CPU/GPU/NPU co-optimization methods, semantic understanding of video and images. It is enough to be an expert in one of these topics.
Research and development of next generation power-efficient 3D-graphics and video algorithms and corresponding tools, support of new hardware features to bring impressive desktop level experience to mobile games and multimedia
Enhance compiler inter-procedural and profile-guided optimizations for OS and mobile native libraries
Programming language memory management benchmarking
Design a benchmark to measure GC and RC overheads (Kotlin vs. Kotlin/Native) for mobile applications, focusing on performance, memory consumption, and stop-the-world gaps
Lock-free memory management for thread-intensive applications
Language temporary storage extensions
ML-based code completion
Auto-completion tools integrated into an IDE can significantly improve developer productivity. Auto-completion tools can be trained on open-source repositories to detect common code patterns and suggest the most likely code completion. Smart auto-completion tools can be especially useful for IDEs of dynamic languages.
ML-guided fuzzing with libFuzzer / AFL
Implement an ML-guided fuzzing technique based on the open-source libFuzzer tool (part of LLVM). The trained ML model can help find new bugs and increase code coverage.
Automatic program repair
Combination of semantic-analysis-based program repair and ML-driven repair for regression test fixes. Develop a scalable approach for the automatic fixing of regression tests using symbolic execution. Machine learning can be used to propose potential fixes for a regression test and helps reduce the search space.
Data redundancy elimination technologies for enterprise storage systems: deduplication and lossless data compression
Research in the data redundancy elimination domain. Development of brand-new algorithms for deduplication and lossless data compression, or improvement of existing algorithms to achieve a higher data reduction ratio. Research variable-size deduplication methods for use in primary storage. Improve LZ-class compression and entropy encoding methods (Huffman encoding, Finite State Entropy encoding, arithmetic encoding, prediction by partial matching). Explore the use of artificial intelligence for lossless data compression. Optimize new or existing algorithms for the ARM hardware platform to obtain high-performance algorithms.
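To illustrate the variable-size deduplication direction, here is a minimal sketch of content-defined chunking; a simple polynomial rolling hash stands in for the Rabin fingerprints used in practice, and the window, mask, and base values are illustrative, not production parameters:

```python
# A minimal sketch of content-defined chunking for variable-size deduplication.
import os

def chunk_boundaries(data: bytes, window=48, mask=(1 << 13) - 1,
                     base=257, mod=1 << 32):
    top = pow(base, window - 1, mod)   # weight of the byte leaving the window
    boundaries, h = [], 0
    for i, b in enumerate(data):
        if i >= window:
            h = (h - data[i - window] * top) % mod   # drop the oldest byte
        h = (h * base + b) % mod                     # slide in the new byte
        if i + 1 >= window and (h & mask) == mask:   # boundary condition
            boundaries.append(i + 1)                 # cut after byte i
    return boundaries

print(len(chunk_boundaries(os.urandom(1 << 20))))  # ~128 cuts (~8 KiB chunks)
```

Because boundaries depend only on local content, inserting bytes early in a stream shifts only nearby chunk boundaries, which is what makes variable-size deduplication robust to edits.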
Multilingual text-to-speech systems, model compression, model optimization.
Mobile OS technologies
OS/kernel performance optimization, R&D of OS features for a better user experience; ecosystem, APIs, libraries, and tools for the next generation of mobile applications
Improvement of accuracy and stability of color reproduction by smartphone camera
Development of algorithms for automatic white balance and color correction in the smartphone camera pipeline. Deep mathematical analysis of color formation and processing. Both AI and non-AI techniques are being considered.
AR and VR devices and algorithm development
Development of the next generation of the augmented reality engine Huawei AREngine. Development of spatial sensors and algorithms for AR/VR interaction. Development of interaction systems in virtual reality and generation of photorealistic avatars using artificial intelligence systems.
Nonlinear algorithm development for 5G Wireless telecommunication systems
We are focused on the development of innovative algorithms for next-generation Wireless (5G) telecommunication systems. As part of our team, you will develop digital signal processing algorithms for the compensation of nonlinear distortions in Wireless equipment. Research directions: digital signal processing, deep neural networks, neural architecture search, ML and optimization methods (Gradient Descent (GD), Stochastic GD, Conjugate Gradients, BFGS).
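As one concrete baseline for nonlinearity compensation, here is a minimal sketch of a memory-polynomial predistorter fitted by least squares with indirect learning; the toy PA model and all parameters are illustrative assumptions, not a real amplifier:

```python
# A minimal sketch: memory-polynomial digital predistortion (DPD) fitted by
# least squares using indirect learning, on synthetic data.
import numpy as np

def mp_basis(x, K=5, M=3):
    # Memory-polynomial terms: x(n-m) * |x(n-m)|^(k-1) for odd k <= K, m < M.
    cols = []
    for m in range(M):
        xm = np.roll(x, m)  # circular delay (a simplification for the sketch)
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(1)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
pa = lambda s: s + 0.05 * s * np.abs(s) ** 2   # toy PA with cubic distortion

# Indirect learning: fit a postdistorter mapping PA output back to its input,
# then reuse the same coefficients as the predistorter in front of the PA.
y = pa(x)
coeffs, *_ = np.linalg.lstsq(mp_basis(y), x, rcond=None)
x_pred = mp_basis(x) @ coeffs
err_before = np.mean(np.abs(pa(x) - x) ** 2)
err_after = np.mean(np.abs(pa(x_pred) - x) ** 2)
print(f"distortion power: {err_before:.2e} -> {err_after:.2e}")
```

The research directions listed above (neural predistorters, NAS, second-order optimizers) aim to outperform such polynomial baselines on wideband signals and real hardware impairments.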
Wireless communication technologies
Work on 5G algorithm optimization, new concept design (6G), data-driven solutions. Please note that we consider 4th year bachelor students and above to work on this project.
CNN-based Video Coding
Developing the next generation of video coding technologies based on CNNs; designing NN structures and optimizing their performance based on video compression criteria; promoting the developed technologies in international standards organizations.
Dynamic Binary Translator
A Dynamic Binary Translator makes it possible to run apps developed for one computer architecture on a CPU with a different architecture. The field of development and research includes, but is not limited to: JIT compilation, code optimization, and virtualization.