[DOCS] Port release docs to master #31018


Open

wants to merge 12 commits into master
754 changes: 496 additions & 258 deletions docs/articles_en/about-openvino/release-notes-openvino.rst

Large diffs are not rendered by default.

4 changes: 1 addition & 3 deletions docs/articles_en/get-started/install-openvino.rst
@@ -22,15 +22,13 @@ Install OpenVINO™ 2025.1

<script type="module" crossorigin src="../_static/selector-tool/assets/index-Codcw3jz.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<iframe id="selector" src="../_static/selector-tool/selector-73890fe.html" style="width: 100%; border: none" title="Download Intel® Distribution of OpenVINO™ Toolkit"></iframe>
<iframe id="selector" src="../_static/selector-tool/selector-e4b375c.html" style="width: 100%; border: none" title="Download Intel® Distribution of OpenVINO™ Toolkit"></iframe>

OpenVINO 2025.1, described here, is not a Long-Term Support version!
All currently supported versions are:

* 2025.1 (development)
* 2024.6 (maintenance)
* 2023.3 (LTS), which will be deprecated at the end of 2025.


.. dropdown:: Effortless GenAI integration with OpenVINO GenAI

@@ -133,7 +133,7 @@ Additional Resources
####################

* :doc:`GPU Device <../../../openvino-workflow/running-inference/inference-devices-and-modes/gpu-device>`
* :doc:`Install Intel® Distribution of OpenVINO™ toolkit from a Docker Image <../install-openvino-archive-linux>`
* :doc:`Install Intel® Distribution of OpenVINO™ toolkit from a Docker Image <../install-openvino-docker-linux>`
* `Docker CI framework for Intel® Distribution of OpenVINO™ toolkit <https://github.com/openvinotoolkit/docker_ci/blob/master/README.md>`__
* `Get Started with DockerHub CI for Intel® Distribution of OpenVINO™ toolkit <https://github.com/openvinotoolkit/docker_ci/blob/master/get-started.md>`__
* `Dockerfiles with Intel® Distribution of OpenVINO™ toolkit <https://github.com/openvinotoolkit/docker_ci/blob/master/dockerfiles/README.md>`__
@@ -12,7 +12,7 @@ Install OpenVINO™ Runtime from Conda Forge
Note that the Conda Forge distribution:

* offers both C/C++ and Python APIs
* does not offer support for NPU inference
* supports NPU inference on Linux only
* is dedicated to users of all major OSes: Windows, Linux, and macOS
(all x86_64 / arm64 architectures)
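
As a quick orientation, installing from Conda Forge typically comes down to a single command. The line below is a sketch (the package spec may pin a specific version); follow the exact, up-to-date steps in this article:

.. code-block:: sh

   # Install the OpenVINO Runtime package from the conda-forge channel.
   conda install -c conda-forge openvino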

@@ -24,10 +24,12 @@ You can get started easily with pre-built and published docker images, which are
You can use the `available Dockerfiles on GitHub <https://github.com/openvinotoolkit/docker_ci/tree/master/dockerfiles>`__
or generate one with your settings via the `DockerHub CI framework <https://github.com/openvinotoolkit/docker_ci/>`__,
which can build, test, and deploy an image using the Intel® Distribution of OpenVINO™ toolkit.

You can reuse the available Dockerfiles, add your own layers, and customize the OpenVINO™ image to your needs.
The Docker CI repository includes guides on how to
`get started with docker images <https://github.com/openvinotoolkit/docker_ci/blob/master/get-started.md>`__ and how to use
`OpenVINO™ Toolkit containers with GPU accelerators. <https://github.com/openvinotoolkit/docker_ci/blob/master/docs/accelerators.md>`__
The Docker CI repository includes the following guides:

* `Get started with docker images <https://github.com/openvinotoolkit/docker_ci/blob/master/get-started.md>`__
* How to use OpenVINO™ Toolkit containers with `GPU accelerators <https://github.com/openvinotoolkit/docker_ci/blob/master/docs/accelerators.md>`__ and `NPU accelerators <https://github.com/openvinotoolkit/docker_ci/blob/master/docs/npu_accelerator.md>`__.
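
As an illustration, a typical quick start with a pre-built image might look like this (the image name and tag are examples; check Docker Hub for the images that match your OS and OpenVINO version):

.. code-block:: sh

   # Pull a pre-built OpenVINO runtime image (example name and tag).
   docker pull openvino/ubuntu22_runtime:latest

   # Start an interactive container based on that image.
   docker run -it --rm openvino/ubuntu22_runtime:latest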

To start using Dockerfiles, install Docker Engine or a compatible container
engine on your system:
@@ -11,7 +11,6 @@ Install OpenVINO™ Runtime on Linux From YUM Repository
Note that the YUM distribution:

* offers both C/C++ and Python APIs
* does not offer support for NPU inference
* is dedicated to Linux users only
* additionally includes code samples
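
For orientation only, installation from an already-configured YUM repository usually reduces to one command. This is a sketch (package naming can vary between releases), so follow the official steps in this article:

.. code-block:: sh

   # Install the OpenVINO package from the configured YUM repository.
   sudo yum install openvino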

@@ -15,33 +15,31 @@ Inference Devices and Modes
inference-devices-and-modes/query-device-properties


The OpenVINO runtime offers multiple inference modes to enable the best hardware utilization under
different conditions:
The OpenVINO™ Runtime offers several inference modes to optimize hardware usage.
You can run inference on a single device or use automated modes that manage multiple devices:

| **single-device inference**
| Define just one device responsible for the entire inference workload. It supports a range of
processors by means of the following plugins embedded in the Runtime library:
| This mode runs all inference on one selected device. The OpenVINO Runtime includes
built-in plugins that support the following devices:
| :doc:`CPU <inference-devices-and-modes/cpu-device>`
| :doc:`GPU <inference-devices-and-modes/gpu-device>`
| :doc:`NPU <inference-devices-and-modes/npu-device>`

| **automated inference modes**
| Assume certain level of automation in selecting devices for inference. They may potentially
increase your deployed solution's performance and portability. The automated modes are:
| These modes automate device selection and workload distribution, potentially increasing
performance and portability:
| :doc:`Automatic Device Selection (AUTO) <inference-devices-and-modes/auto-device-selection>`
| :doc:`Heterogeneous Execution (HETERO) <inference-devices-and-modes/hetero-execution>`
| :doc:`Automatic Batching Execution (Auto-batching) <inference-devices-and-modes/automatic-batching>`
| :doc:`Heterogeneous Execution (HETERO) <inference-devices-and-modes/hetero-execution>` across different device types
| :doc:`Automatic Batching Execution (Auto-batching) <inference-devices-and-modes/automatic-batching>`: automatically groups inference requests to improve throughput

To learn how to change the device configuration, read the :doc:`Query device properties article <inference-devices-and-modes/query-device-properties>`.
Learn how to configure devices in the :doc:`Query device properties <inference-devices-and-modes/query-device-properties>` article.
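
As a minimal sketch of the difference (``model.xml`` is a placeholder path), pinning inference to one device versus delegating the choice to AUTO looks like this:

.. code-block:: cpp

   #include <openvino/openvino.hpp>

   int main() {
       ov::Core core;

       // Read a model from disk ("model.xml" is a placeholder path).
       auto model = core.read_model("model.xml");

       // Single-device inference: the whole workload is pinned to the CPU plugin.
       auto compiled_cpu = core.compile_model(model, "CPU");

       // Automated selection: AUTO picks the most suitable available device.
       auto compiled_auto = core.compile_model(model, "AUTO");

       return 0;
   }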

Enumerating Available Devices
#######################################

The OpenVINO Runtime API features dedicated methods of enumerating devices and their capabilities.
Note that beyond the typical "CPU" or "GPU" device names, more qualified names are used when multiple
instances of a device are available (iGPU is always GPU.0).
The output you receive may look like this (truncated to device names only, two GPUs are listed
as an example):
The OpenVINO Runtime API provides methods to list available devices and their details.
When there are multiple instances of a device, they get specific names like GPU.0 for iGPU.
Here is an example of the output with device names, including two GPUs:

.. code-block:: sh

@@ -54,9 +52,10 @@ as an example):
Device: GPU.1


You may see how to obtain this information in the :doc:`Hello Query Device Sample <../../get-started/learn-openvino/openvino-samples/hello-query-device>`.
Here is an example of a simple programmatic way to enumerate the devices and use them with the
multi-device mode:
See the :doc:`Hello Query Device Sample <../../get-started/learn-openvino/openvino-samples/hello-query-device>`
for more details.

Below is an example showing how to list available devices and use them with multi-device mode:

.. tab-set::

@@ -67,8 +66,8 @@ multi-device mode:
:language: cpp
:fragment: [part2]
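
If the snippet above is not at hand, here is a minimal, self-contained sketch of the same enumeration; it prints device names in the format shown earlier:

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <iostream>

   int main() {
       ov::Core core;

       // List every device visible to this OpenVINO installation,
       // e.g. "CPU", "GPU.0", "GPU.1".
       for (const auto& device : core.get_available_devices()) {
           std::cout << "Device: " << device << std::endl;
       }
       return 0;
   }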

With two GPU devices used in one setup, the explicit configuration would be "MULTI:GPU.1,GPU.0".
Accordingly, the code that loops over all available devices of the "GPU" type only is as follows:
If you have two GPU devices, you can specify them explicitly as "MULTI:GPU.1,GPU.0".
Here is how to list and use all available GPU devices:

.. tab-set::

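
As a sketch of the GPU-only case, the "MULTI:..." device string can also be built programmatically (the model path is a placeholder; device order in the string sets priority):

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <string>

   int main() {
       ov::Core core;
       auto model = core.read_model("model.xml");  // placeholder path

       // Collect only GPU devices and join them into a MULTI device string,
       // e.g. "MULTI:GPU.0,GPU.1".
       std::string multi = "MULTI";
       char sep = ':';
       for (const auto& device : core.get_available_devices()) {
           if (device.rfind("GPU", 0) == 0) {  // name starts with "GPU"
               multi += sep + device;
               sep = ',';
           }
       }

       auto compiled = core.compile_model(model, multi);
       return 0;
   }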