Architecture

General architecture

Yarn works through a core package (published as @yarnpkg/core) that exposes the various base components that make up a project. Some of the components are classes you might recognize from the API: Configuration, Project, Workspace, Cache, Manifest, and others. All of these are provided by the core package.
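The way these components relate to one another can be sketched with a few simplified TypeScript shapes. Everything below is illustrative only - the real @yarnpkg/core classes are far richer than these toy interfaces:

```typescript
// Toy shapes loosely mirroring the classes mentioned above; the real
// @yarnpkg/core API is much larger - this only illustrates how the
// components relate to one another.
interface Manifest {
  name: string;
  dependencies: Map<string, string>; // package name -> semver range
}

interface Workspace {
  cwd: string;
  manifest: Manifest;
}

interface Configuration {
  get(key: string): unknown; // settings resolved from the configuration files
}

interface Cache {
  has(locatorHash: string): boolean;
}

interface Project {
  configuration: Configuration;
  cache: Cache;
  workspaces: Workspace[]; // every workspace found in the project
}

// A Project ties the other components together:
const project: Project = {
  configuration: {get: () => undefined},
  cache: {has: () => false},
  workspaces: [{
    cwd: "/my/project",
    manifest: {name: "root", dependencies: new Map([["lodash", "^4.17.0"]])},
  }],
};

console.log(project.workspaces[0].manifest.dependencies.get("lodash")); // prints "^4.17.0"
```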

The core itself doesn't do much - it merely contains the logic required to manage a project. To use this logic from the command line, Yarn provides an indirection called @yarnpkg/cli which, interestingly, doesn't do much either. It does, however, have two very important responsibilities: it hydrates a project instance based on the current directory (cwd), and it injects the prebuilt Yarn plugins into the environment.
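What "hydrating a project instance" means can be illustrated with a small self-contained sketch. This is not the real @yarnpkg/cli code - just a toy model of its first responsibility: walking up from the cwd until a manifest is found, and treating that directory as the project root.

```typescript
import {existsSync, mkdtempSync, mkdirSync, writeFileSync} from "node:fs";
import {dirname, join} from "node:path";
import {tmpdir} from "node:os";

// Toy model (not the real Yarn API): starting from the current directory,
// walk upward until a package.json manifest is found. The real CLI does
// more work (locating the topmost lockfile, enumerating workspaces, ...).
function findProjectRoot(startCwd: string): string | null {
  let cwd = startCwd;
  while (true) {
    if (existsSync(join(cwd, "package.json")))
      return cwd;
    const next = dirname(cwd);
    if (next === cwd)
      return null; // reached the filesystem root without finding a manifest
    cwd = next;
  }
}

// Demo: build a fake project layout inside a temporary directory.
const root = mkdtempSync(join(tmpdir(), "yarn-demo-"));
writeFileSync(join(root, "package.json"), JSON.stringify({name: "demo"}));
const nested = join(root, "src", "lib");
mkdirSync(nested, {recursive: true});

console.log(findProjectRoot(nested) === root); // prints "true"
```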

See, Yarn is built in a modular way that allows most of the business logic related to third-party interactions to be externalized into its own package - for example, the npm resolver is but one plugin amongst many others. This design gives us a much simpler codebase to work with (hence increased development speed and a more stable product), and offers plugin authors the ability to write their own external logic without having to modify the Yarn codebase itself.

Install architecture

What happens when running yarn install can be summarized in a few different steps:

  1. First we enter the "resolution step":

    • First we load the entries stored within the lockfile; then, based on those data and the current state of the project (which it figures out by reading the manifest files, aka package.json), the core runs an internal algorithm to find out which entries are missing.

    • For each of those missing entries, it queries the plugins through the Resolver interface, asking them whether they know about a package matching the given descriptor (supportsDescriptor), its exact identity (getCandidates), and its transitive dependency list (resolve).

    • Once it has obtained a new list of package metadata, the core starts a new resolution pass on the transitive dependencies of the newly added packages. This is repeated until it figures out that all packages from the dependency tree have their metadata stored within the lockfile.

    • Finally, once every package range from the dependency tree has been resolved into metadata, the core builds the tree in memory one last time in order to generate what we call "virtual packages". In short, those virtual packages are split instances of the same base package - we use them to disambiguate all packages that list peer dependencies, whose dependency sets would change depending on their location within the dependency tree (consult this lexicon entry for more information).

  2. Once the resolution is done, we enter the "fetch step":

    • Now that we have the exact set of packages that make up our dependency tree, we iterate over it, and for each package we start a new request to the cache to know whether the package can be found there. If it can't, we do just as we did in the previous step: we ask our plugins (through the Fetcher interface) whether they know about the package (supports) and, if so, to retrieve it from whatever its remote location is (fetch).

    • An interesting tidbit regarding the fetchers: they communicate with the core through an abstraction layer over fs. We do this so that our packages can come from many different sources - it can be a zip archive for packages downloaded from a registry, or an actual directory on disk for portal: dependencies.

  3. And finally, once all the packages are ready for consumption, comes the "link step":

    • In order to work properly, the packages you use must be installed on the disk in some way. For example, in the case of a native Node application, your packages would have to be installed into a set of node_modules directories so that the interpreter can locate them. That's what the linkers are about. Through the Linker and Installer interfaces, the Yarn core communicates with the registered plugins to let them know about the packages listed in the dependency tree, and describes their relationships (for example, it would tell them that tapable is a dependency of webpack). The plugins can then decide what to do with this information in whatever way they see fit.

    • Doing this means that new linkers can be created for other programming languages pretty easily - you just need to write your own logic describing what should happen with the packages Yarn provides. Want to generate an __autoload.php? Do it! Want to set up a Python virtual env? No problem!

    • Something else that's pretty cool: the packages within the dependency tree don't all have to be of the same type. Our plugin design allows multiple linkers to be instantiated simultaneously. Even better - packages can depend on one another across linkers! You could have a JavaScript package depending on a Python package (which is technically the case for node-gyp, for example).