all messages for Guix-related lists mirrored at yhetil.org
* [bug#49281] Add dynaconf
@ 2021-06-29 22:38 paul
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
  2021-07-23  6:14 ` [bug#49281] " Sarah Morgensen
  0 siblings, 2 replies; 10+ messages in thread
From: paul @ 2021-06-29 22:38 UTC (permalink / raw)
  To: 49281

Hi Guixers :),

I'm sending a patch series to add dynaconf.

Thank you for your time,

Giacomo





^ permalink raw reply	[flat|nested] 10+ messages in thread

* [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger.
  2021-06-29 22:38 [bug#49281] Add dynaconf paul
@ 2021-06-29 22:42 ` Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 2/7] gnu: Add python-flake8-todo Giacomo Leidi
                     ` (5 more replies)
  2021-07-23  6:14 ` [bug#49281] " Sarah Morgensen
  1 sibling, 6 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/python-xyz.scm (python-flake8-debugger): New variable.
---
 gnu/packages/python-xyz.scm | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index bb28120c25..e98dc164fa 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -64,7 +64,7 @@
 ;;; Copyright © 2019, 2020 Alex Griffin <a@ajgrf.com>
 ;;; Copyright © 2019, 2020 Pierre Langlois <pierre.langlois@gmx.com>
 ;;; Copyright © 2019 Jacob MacDonald <jaccarmac@gmail.com>
-;;; Copyright © 2019, 2020 Giacomo Leidi <goodoldpaul@autistici.org>
+;;; Copyright © 2019, 2020, 2021 Giacomo Leidi <goodoldpaul@autistici.org>
 ;;; Copyright © 2019 Wiktor Żelazny <wzelazny@vurv.cz>
 ;;; Copyright © 2019, 2020 Tanguy Le Carrour <tanguy@bioneland.org>
 ;;; Copyright © 2019, 2021 Mădălin Ionel Patrașcu <madalinionel.patrascu@mdc-berlin.de>
@@ -9459,6 +9459,31 @@ These should be used in preference to using a backslash for line continuation.
 @end quotation")
     (license license:asl2.0)))
 
+(define-public python-flake8-debugger
+  (package
+    (name "python-flake8-debugger")
+    (version "4.0.0")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "flake8-debugger" version))
+       (sha256
+        (base32
+         "19pdfx0rb3k1i8hxfjdi8ndd6ayzq8g1041j8zdq256vyxvwfgg4"))))
+    (build-system python-build-system)
+    (propagated-inputs
+     `(("python-flake8" ,python-flake8)
+       ("python-pycodestyle" ,python-pycodestyle)
+       ("python-six" ,python-six)))
+    (home-page
+     "https://github.com/jbkahn/flake8-debugger")
+    (synopsis
+     "Ipdb/pdb statement checker plugin for flake8")
+    (description
+     "This package provides the @code{flake8-debugger} Python module,
+an ipdb/pdb statement checker plugin for flake8.")
+    (license license:expat)))
+
 (define-public python-flake8-implicit-str-concat
   (package
     (name "python-flake8-implicit-str-concat")
-- 
2.32.0





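For context, the kind of check flake8-debugger performs can be sketched with nothing but the standard library's ast module. This is a hypothetical illustration, not the plugin's actual code; the function name and message strings are made up:

```python
import ast

# Hypothetical sketch of a debugger-statement check: walk the AST and
# flag imports of pdb/ipdb as well as set_trace() calls.
def find_debugger_statements(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name in ("pdb", "ipdb"):
                    findings.append((node.lineno, f"T100 import for {alias.name} found"))
        elif isinstance(node, ast.Call):
            func = node.func
            # Matches pdb.set_trace(), ipdb.set_trace(), etc.
            if isinstance(func, ast.Attribute) and func.attr == "set_trace":
                findings.append((node.lineno, "T100 trace found"))
    return findings

print(find_debugger_statements("import pdb\npdb.set_trace()\n"))
```

The real plugin registers such a checker as a flake8 entry point; the sketch only shows the AST-walking idea.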

* [bug#49281] [PATCH 2/7] gnu: Add python-flake8-todo.
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
@ 2021-06-29 22:42   ` Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 3/7] gnu: Add python-dotenv Giacomo Leidi
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/python-xyz.scm (python-flake8-todo): New variable.
---
 gnu/packages/python-xyz.scm | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index e98dc164fa..f2d0b6d76d 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -9641,6 +9641,29 @@ lints.")
     (description "This package provides a Flake8 lint for quotes.")
     (license license:expat)))
 
+(define-public python-flake8-todo
+  (package
+    (name "python-flake8-todo")
+    (version "0.7")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "flake8-todo" version))
+       (sha256
+        (base32
+         "05arm0sch3r8248035kilmf01z0mxsahw6vpbbz0d343zy8m8k3f"))))
+    (build-system python-build-system)
+    (propagated-inputs
+     `(("python-pycodestyle" ,python-pycodestyle)))
+    (home-page
+     "https://github.com/schlamar/flake8-todo")
+    (synopsis
+     "TODO notes checker, plugin for flake8")
+    (description
+     "This package provides the @code{flake8-todo} Python module, a
+TODO notes checker plugin for flake8.")
+    (license license:expat)))
+
 (define-public python-autoflake
   (package
     (name "python-autoflake")
-- 
2.32.0





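A TODO-notes check like the one flake8-todo provides amounts to scanning comment tokens. The following is a minimal stdlib-only sketch under that assumption, not the plugin's real implementation:

```python
import io
import tokenize

# Hypothetical sketch: report a finding for every comment containing "TODO".
def find_todo_notes(source):
    findings = []
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    for tok in tokens:
        if tok.type == tokenize.COMMENT and "TODO" in tok.string:
            findings.append((tok.start[0], "T000 todo note found"))
    return findings

print(find_todo_notes("x = 1  # TODO: rename\n"))
```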

* [bug#49281] [PATCH 3/7] gnu: Add python-dotenv.
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 2/7] gnu: Add python-flake8-todo Giacomo Leidi
@ 2021-06-29 22:42   ` Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 4/7] gnu: Add python-box Giacomo Leidi
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/python-xyz.scm (python-dotenv): New variable.
---
 gnu/packages/python-xyz.scm | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index f2d0b6d76d..5d8bb8fc80 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -26087,3 +26087,30 @@ is the cythonized version of @code{fractions.Fraction}.")
      "@code{pathvalidate} is a Python library to sanitize/validate strings
 representing paths or filenames.")
     (license license:expat)))
+
+(define-public python-dotenv
+  (package
+    (name "python-dotenv")
+    (version "0.18.0")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "python-dotenv" version))
+       (sha256
+        (base32
+         "0b90br3f48ykx5ddfpx2zmsh4vmdqw6s812drcy9pn2q3qyarypg"))))
+    (build-system python-build-system)
+    (propagated-inputs
+     `(("python-click" ,python-click-5)))
+    (native-inputs
+     `(("python-mock" ,python-mock)
+       ("python-pytest" ,python-pytest)
+       ("python-sh" ,python-sh)))
+    (home-page
+     "https://github.com/theskumar/python-dotenv")
+    (synopsis
+     "Setup environment variables according to .env files")
+    (description
+     "This package provides the @code{python-dotenv} Python module to
+read key-value pairs from a .env file and set them as environment variables.")
+    (license license:bsd-3)))
-- 
2.32.0





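The behavior python-dotenv provides can be illustrated with a few lines of plain Python. This is a simplified sketch of the idea (the library's actual parser handles quoting, interpolation, and more):

```python
import os

# Hypothetical sketch: read KEY=VALUE lines and export them into an
# environment mapping, without overriding variables already set.
def load_env_text(text, environ=os.environ):
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        environ.setdefault(key.strip(), value.strip())

env = {}
load_env_text("# comment\nAPP_NAME=demo\nPORT=8080\n", environ=env)
print(env)
```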

* [bug#49281] [PATCH 4/7] gnu: Add python-box.
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 2/7] gnu: Add python-flake8-todo Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 3/7] gnu: Add python-dotenv Giacomo Leidi
@ 2021-06-29 22:42   ` Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 5/7] gnu: python-ruamel.yaml: Update to 0.17.10 Giacomo Leidi
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/python-xyz.scm (python-box): New variable.
---
 gnu/packages/python-xyz.scm | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index 5d8bb8fc80..21ae2fe4f5 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -26114,3 +26114,27 @@ representing paths or filenames.")
      "This package provides the @code{python-dotenv} Python module to
 read key-value pairs from a .env file and set them as environment variables.")
     (license license:bsd-3)))
+
+(define-public python-box
+  (package
+    (name "python-box")
+    (version "5.3.0")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "python-box" version))
+       (sha256
+        (base32
+         "0jhrdif57khx2hsw1q6a9x42knwcvq8ijgqyq1jmll6y6ifyzm2f"))))
+    (build-system python-build-system)
+    (propagated-inputs
+     `(("python-msgpack" ,python-msgpack)
+       ("python-ruamel.yaml" ,python-ruamel.yaml)
+       ("python-toml" ,python-toml)))
+    (home-page "https://github.com/cdgriffith/Box")
+    (synopsis
+     "Advanced Python dictionaries with dot notation access")
+    (description
+     "This package provides the @code{python-box} Python module.
+It implements advanced Python dictionaries with dot notation access.")
+    (license license:expat)))
-- 
2.32.0





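"Dot notation access" for dictionaries, as python-box offers, can be sketched in a few lines. This is a hypothetical toy class for illustration only; the real Box class is far more featureful (conversion, defaults, serialization):

```python
# Hypothetical sketch of dot-notation access on dictionaries.
class DotDict(dict):
    def __getattr__(self, name):
        try:
            value = self[name]
        except KeyError:
            raise AttributeError(name)
        # Wrap nested dicts so chained dot access works too.
        return DotDict(value) if isinstance(value, dict) else value

cfg = DotDict({"server": {"host": "localhost", "port": 8080}})
print(cfg.server.host)
```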

* [bug#49281] [PATCH 5/7] gnu: python-ruamel.yaml: Update to 0.17.10.
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
                     ` (2 preceding siblings ...)
  2021-06-29 22:42   ` [bug#49281] [PATCH 4/7] gnu: Add python-box Giacomo Leidi
@ 2021-06-29 22:42   ` Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 6/7] gnu: Add python-pep8-naming Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 7/7] gnu: Add dynaconf Giacomo Leidi
  5 siblings, 0 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/serialization.scm (python-ruamel.yaml): Update to 0.17.10.
---
 gnu/packages/serialization.scm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gnu/packages/serialization.scm b/gnu/packages/serialization.scm
index 8f292ae408..bd82cd2598 100644
--- a/gnu/packages/serialization.scm
+++ b/gnu/packages/serialization.scm
@@ -463,14 +463,14 @@ it is comparable to protobuf.")
 (define-public python-ruamel.yaml
   (package
     (name "python-ruamel.yaml")
-    (version "0.15.83")
+    (version "0.17.10")
     (source
      (origin
        (method url-fetch)
        (uri (pypi-uri "ruamel.yaml" version))
        (sha256
         (base32
-         "0p4i8ad28cbbbjja8b9274irkhnphhvhap3aym6yb8xfp1d72kpw"))))
+         "0rwywdbmy20qwssccydpaval2vq36825fiva374zf3vavkbchsqh"))))
     (build-system python-build-system)
     (native-inputs
      `(("python-pytest" ,python-pytest)))
-- 
2.32.0






* [bug#49281] [PATCH 6/7] gnu: Add python-pep8-naming.
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
                     ` (3 preceding siblings ...)
  2021-06-29 22:42   ` [bug#49281] [PATCH 5/7] gnu: python-ruamel.yaml: Update to 0.17.10 Giacomo Leidi
@ 2021-06-29 22:42   ` Giacomo Leidi
  2021-06-29 22:42   ` [bug#49281] [PATCH 7/7] gnu: Add dynaconf Giacomo Leidi
  5 siblings, 0 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/python-xyz.scm (python-pep8-naming): New variable.
---
 gnu/packages/python-xyz.scm | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index 21ae2fe4f5..c7f91dd977 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -9245,6 +9245,36 @@ PEP 8.")
 (define-public python2-pep8
   (package-with-python2 python-pep8))
 
+(define-public python-pep8-naming
+  (package
+    (name "python-pep8-naming")
+    (version "0.11.1")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "pep8-naming" version))
+       (sha256
+        (base32
+         "0937rnk3c2z1jkdmbw9hfm80p5k467q7rqhn6slfiprs4kflgpd1"))))
+    (build-system python-build-system)
+    (arguments
+     ;; Tests are broken.  They pass on the tip of the
+     ;; master branch, so hopefully we will be able to
+     ;; enable them in the future.
+     '(#:tests? #f))
+    (propagated-inputs
+     `(("python-flake8" ,python-flake8)
+       ("python-flake8-polyfill"
+        ,python-flake8-polyfill)))
+    (home-page
+     "https://github.com/PyCQA/pep8-naming")
+    (synopsis
+     "Check PEP-8 naming conventions")
+    (description
+     "This package provides the @code{pep8-naming} Python module, a
+plugin for flake8 to check PEP-8 naming conventions.")
+    (license license:expat)))
+
 (define-public python-pep517
   (package
     (inherit python-pep517-bootstrap)
-- 
2.32.0





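One of the naming checks pep8-naming performs (lowercase function names, reported as N802) can be sketched with ast and a regular expression. This is an illustrative assumption about one check, not the plugin's code, which covers many more codes:

```python
import ast
import re

# Hypothetical sketch of an N802-style check: function names should be
# lowercase_with_underscores.
def check_function_names(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and not re.fullmatch(r"[a-z_][a-z0-9_]*", node.name):
            findings.append((node.lineno, f"N802 function name '{node.name}' should be lowercase"))
    return findings

print(check_function_names("def BadName():\n    pass\n"))
```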

* [bug#49281] [PATCH 7/7] gnu: Add dynaconf.
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
                     ` (4 preceding siblings ...)
  2021-06-29 22:42   ` [bug#49281] [PATCH 6/7] gnu: Add python-pep8-naming Giacomo Leidi
@ 2021-06-29 22:42   ` Giacomo Leidi
  5 siblings, 0 replies; 10+ messages in thread
From: Giacomo Leidi @ 2021-06-29 22:42 UTC (permalink / raw)
  To: 49281; +Cc: Giacomo Leidi

* gnu/packages/python-xyz.scm (python-colorama-0.4.1): New variable.
(python-dotenv-0.13.0): New variable.
(dynaconf): New variable.
* gnu/packages/patches/dynaconf-Unvendor-dependencies.patch: New file.
* gnu/local.mk (dist_patch_DATA): Register it.
---
 gnu/local.mk                                  |     1 +
 .../dynaconf-Unvendor-dependencies.patch      | 40096 ++++++++++++++++
 gnu/packages/python-xyz.scm                   |    87 +
 3 files changed, 40184 insertions(+)
 create mode 100644 gnu/packages/patches/dynaconf-Unvendor-dependencies.patch

diff --git a/gnu/local.mk b/gnu/local.mk
index 6b9202cba1..a296d24fe4 100644
--- a/gnu/local.mk
+++ b/gnu/local.mk
@@ -967,6 +967,7 @@ dist_patch_DATA =						\
   %D%/packages/patches/dstat-skip-devices-without-io.patch	\
   %D%/packages/patches/dune-istl-2.7-fix-non-mpi-tests.patch	\
   %D%/packages/patches/dvd+rw-tools-add-include.patch 		\
+  %D%/packages/patches/dynaconf-Unvendor-dependencies.patch 		\
   %D%/packages/patches/ecl-16-format-directive-limit.patch	\
   %D%/packages/patches/ecl-16-ignore-stderr-write-error.patch	\
   %D%/packages/patches/ecl-16-libffi.patch			\
diff --git a/gnu/packages/patches/dynaconf-Unvendor-dependencies.patch b/gnu/packages/patches/dynaconf-Unvendor-dependencies.patch
new file mode 100644
index 0000000000..a7d184e662
--- /dev/null
+++ b/gnu/packages/patches/dynaconf-Unvendor-dependencies.patch
@@ -0,0 +1,40096 @@
+From 73a56e307650000b576fe28ab2afe824a186d1da Mon Sep 17 00:00:00 2001
+From: Giacomo Leidi <goodoldpaul@autistici.org>
+Date: Sat, 24 Apr 2021 22:59:37 +0200
+Subject: [PATCH] Unbundle some dependencies.
+
+Box was not unvendored because it appears to be heavily patched.
+---
+ dynaconf/cli.py                               |    4 +-
+ dynaconf/default_settings.py                  |    2 +-
+ dynaconf/loaders/env_loader.py                |    2 +-
+ dynaconf/loaders/toml_loader.py               |    2 +-
+ dynaconf/loaders/yaml_loader.py               |    2 +-
+ dynaconf/utils/parse_conf.py                  |    2 +-
+ dynaconf/vendor/box/converters.py             |    6 +-
+ dynaconf/vendor/box/from_file.py              |    6 +-
+ dynaconf/vendor/click/README.md               |    5 -
+ dynaconf/vendor/click/__init__.py             |   60 -
+ dynaconf/vendor/click/_bashcomplete.py        |  114 -
+ dynaconf/vendor/click/_compat.py              |  240 --
+ dynaconf/vendor/click/_termui_impl.py         |  262 ---
+ dynaconf/vendor/click/_textwrap.py            |   19 -
+ dynaconf/vendor/click/_unicodefun.py          |   28 -
+ dynaconf/vendor/click/_winconsole.py          |  108 -
+ dynaconf/vendor/click/core.py                 |  620 -----
+ dynaconf/vendor/click/decorators.py           |  115 -
+ dynaconf/vendor/click/exceptions.py           |   76 -
+ dynaconf/vendor/click/formatting.py           |   90 -
+ dynaconf/vendor/click/globals.py              |   14 -
+ dynaconf/vendor/click/parser.py               |  157 --
+ dynaconf/vendor/click/termui.py               |  135 --
+ dynaconf/vendor/click/testing.py              |  108 -
+ dynaconf/vendor/click/types.py                |  227 --
+ dynaconf/vendor/click/utils.py                |  119 -
+ dynaconf/vendor/dotenv/README.md              |    6 -
+ dynaconf/vendor/dotenv/__init__.py            |   18 -
+ dynaconf/vendor/dotenv/cli.py                 |   56 -
+ dynaconf/vendor/dotenv/compat.py              |   18 -
+ dynaconf/vendor/dotenv/ipython.py             |   18 -
+ dynaconf/vendor/dotenv/main.py                |  114 -
+ dynaconf/vendor/dotenv/parser.py              |   85 -
+ dynaconf/vendor/dotenv/py.typed               |    1 -
+ dynaconf/vendor/dotenv/version.py             |    1 -
+ dynaconf/vendor/ruamel/__init__.py            |    0
+ dynaconf/vendor/ruamel/yaml/CHANGES           |  957 --------
+ dynaconf/vendor/ruamel/yaml/LICENSE           |   21 -
+ dynaconf/vendor/ruamel/yaml/MANIFEST.in       |    3 -
+ dynaconf/vendor/ruamel/yaml/PKG-INFO          |  782 -------
+ dynaconf/vendor/ruamel/yaml/README.rst        |  752 ------
+ dynaconf/vendor/ruamel/yaml/__init__.py       |   10 -
+ dynaconf/vendor/ruamel/yaml/anchor.py         |    7 -
+ dynaconf/vendor/ruamel/yaml/comments.py       |  485 ----
+ dynaconf/vendor/ruamel/yaml/compat.py         |  120 -
+ dynaconf/vendor/ruamel/yaml/composer.py       |   82 -
+ .../vendor/ruamel/yaml/configobjwalker.py     |    4 -
+ dynaconf/vendor/ruamel/yaml/constructor.py    |  728 ------
+ dynaconf/vendor/ruamel/yaml/cyaml.py          |   20 -
+ dynaconf/vendor/ruamel/yaml/dumper.py         |   16 -
+ dynaconf/vendor/ruamel/yaml/emitter.py        |  678 ------
+ dynaconf/vendor/ruamel/yaml/error.py          |   90 -
+ dynaconf/vendor/ruamel/yaml/events.py         |   45 -
+ dynaconf/vendor/ruamel/yaml/loader.py         |   18 -
+ dynaconf/vendor/ruamel/yaml/main.py           |  462 ----
+ dynaconf/vendor/ruamel/yaml/nodes.py          |   32 -
+ dynaconf/vendor/ruamel/yaml/parser.py         |  216 --
+ dynaconf/vendor/ruamel/yaml/py.typed          |    0
+ dynaconf/vendor/ruamel/yaml/reader.py         |  117 -
+ dynaconf/vendor/ruamel/yaml/representer.py    |  578 -----
+ dynaconf/vendor/ruamel/yaml/resolver.py       |  160 --
+ dynaconf/vendor/ruamel/yaml/scalarbool.py     |   21 -
+ dynaconf/vendor/ruamel/yaml/scalarfloat.py    |   33 -
+ dynaconf/vendor/ruamel/yaml/scalarint.py      |   37 -
+ dynaconf/vendor/ruamel/yaml/scalarstring.py   |   59 -
+ dynaconf/vendor/ruamel/yaml/scanner.py        |  602 -----
+ dynaconf/vendor/ruamel/yaml/serializer.py     |   91 -
+ dynaconf/vendor/ruamel/yaml/setup.cfg         |    4 -
+ dynaconf/vendor/ruamel/yaml/setup.py          |  402 ----
+ dynaconf/vendor/ruamel/yaml/timestamp.py      |    8 -
+ dynaconf/vendor/ruamel/yaml/tokens.py         |   97 -
+ dynaconf/vendor/ruamel/yaml/util.py           |   69 -
+ dynaconf/vendor/toml/README.md                |    5 -
+ dynaconf/vendor/toml/__init__.py              |   16 -
+ dynaconf/vendor/toml/decoder.py               |  515 ----
+ dynaconf/vendor/toml/encoder.py               |  134 --
+ dynaconf/vendor/toml/ordered.py               |    7 -
+ dynaconf/vendor/toml/tz.py                    |   10 -
+ dynaconf/vendor/vendor.txt                    |    4 -
+ dynaconf/vendor_src/box/converters.py         |    4 +-
+ dynaconf/vendor_src/box/from_file.py          |    4 +-
+ dynaconf/vendor_src/click/README.md           |    5 -
+ dynaconf/vendor_src/click/__init__.py         |   75 -
+ dynaconf/vendor_src/click/_bashcomplete.py    |  371 ---
+ dynaconf/vendor_src/click/_compat.py          |  611 -----
+ dynaconf/vendor_src/click/_termui_impl.py     |  667 ------
+ dynaconf/vendor_src/click/_textwrap.py        |   37 -
+ dynaconf/vendor_src/click/_unicodefun.py      |   82 -
+ dynaconf/vendor_src/click/_winconsole.py      |  308 ---
+ dynaconf/vendor_src/click/core.py             | 2070 -----------------
+ dynaconf/vendor_src/click/decorators.py       |  331 ---
+ dynaconf/vendor_src/click/exceptions.py       |  233 --
+ dynaconf/vendor_src/click/formatting.py       |  279 ---
+ dynaconf/vendor_src/click/globals.py          |   47 -
+ dynaconf/vendor_src/click/parser.py           |  431 ----
+ dynaconf/vendor_src/click/termui.py           |  688 ------
+ dynaconf/vendor_src/click/testing.py          |  362 ---
+ dynaconf/vendor_src/click/types.py            |  726 ------
+ dynaconf/vendor_src/click/utils.py            |  440 ----
+ dynaconf/vendor_src/dotenv/README.md          |    6 -
+ dynaconf/vendor_src/dotenv/__init__.py        |   46 -
+ dynaconf/vendor_src/dotenv/cli.py             |  145 --
+ dynaconf/vendor_src/dotenv/compat.py          |   49 -
+ dynaconf/vendor_src/dotenv/ipython.py         |   41 -
+ dynaconf/vendor_src/dotenv/main.py            |  323 ---
+ dynaconf/vendor_src/dotenv/parser.py          |  237 --
+ dynaconf/vendor_src/dotenv/py.typed           |    1 -
+ dynaconf/vendor_src/dotenv/version.py         |    1 -
+ dynaconf/vendor_src/ruamel/__init__.py        |    0
+ dynaconf/vendor_src/ruamel/yaml/CHANGES       |  957 --------
+ dynaconf/vendor_src/ruamel/yaml/LICENSE       |   21 -
+ dynaconf/vendor_src/ruamel/yaml/MANIFEST.in   |    3 -
+ dynaconf/vendor_src/ruamel/yaml/PKG-INFO      |  782 -------
+ dynaconf/vendor_src/ruamel/yaml/README.rst    |  752 ------
+ dynaconf/vendor_src/ruamel/yaml/__init__.py   |   60 -
+ dynaconf/vendor_src/ruamel/yaml/anchor.py     |   20 -
+ dynaconf/vendor_src/ruamel/yaml/comments.py   | 1149 ---------
+ dynaconf/vendor_src/ruamel/yaml/compat.py     |  324 ---
+ dynaconf/vendor_src/ruamel/yaml/composer.py   |  238 --
+ .../vendor_src/ruamel/yaml/configobjwalker.py |   14 -
+ .../vendor_src/ruamel/yaml/constructor.py     | 1805 --------------
+ dynaconf/vendor_src/ruamel/yaml/cyaml.py      |  185 --
+ dynaconf/vendor_src/ruamel/yaml/dumper.py     |  221 --
+ dynaconf/vendor_src/ruamel/yaml/emitter.py    | 1688 --------------
+ dynaconf/vendor_src/ruamel/yaml/error.py      |  311 ---
+ dynaconf/vendor_src/ruamel/yaml/events.py     |  157 --
+ dynaconf/vendor_src/ruamel/yaml/loader.py     |   74 -
+ dynaconf/vendor_src/ruamel/yaml/main.py       | 1534 ------------
+ dynaconf/vendor_src/ruamel/yaml/nodes.py      |  131 --
+ dynaconf/vendor_src/ruamel/yaml/parser.py     |  802 -------
+ dynaconf/vendor_src/ruamel/yaml/py.typed      |    0
+ dynaconf/vendor_src/ruamel/yaml/reader.py     |  311 ---
+ .../vendor_src/ruamel/yaml/representer.py     | 1283 ----------
+ dynaconf/vendor_src/ruamel/yaml/resolver.py   |  399 ----
+ dynaconf/vendor_src/ruamel/yaml/scalarbool.py |   51 -
+ .../vendor_src/ruamel/yaml/scalarfloat.py     |  127 -
+ dynaconf/vendor_src/ruamel/yaml/scalarint.py  |  130 --
+ .../vendor_src/ruamel/yaml/scalarstring.py    |  156 --
+ dynaconf/vendor_src/ruamel/yaml/scanner.py    | 1980 ----------------
+ dynaconf/vendor_src/ruamel/yaml/serializer.py |  240 --
+ dynaconf/vendor_src/ruamel/yaml/setup.cfg     |    4 -
+ dynaconf/vendor_src/ruamel/yaml/setup.py      |  962 --------
+ dynaconf/vendor_src/ruamel/yaml/timestamp.py  |   28 -
+ dynaconf/vendor_src/ruamel/yaml/tokens.py     |  286 ---
+ dynaconf/vendor_src/ruamel/yaml/util.py       |  190 --
+ dynaconf/vendor_src/toml/README.md            |    5 -
+ dynaconf/vendor_src/toml/__init__.py          |   25 -
+ dynaconf/vendor_src/toml/decoder.py           | 1052 ---------
+ dynaconf/vendor_src/toml/encoder.py           |  304 ---
+ dynaconf/vendor_src/toml/ordered.py           |   15 -
+ dynaconf/vendor_src/toml/tz.py                |   21 -
+ dynaconf/vendor_src/vendor.txt                |    4 -
+ tests/test_cli.py                             |    2 +-
+ 153 files changed, 18 insertions(+), 38742 deletions(-)
+ delete mode 100644 dynaconf/vendor/click/README.md
+ delete mode 100644 dynaconf/vendor/click/__init__.py
+ delete mode 100644 dynaconf/vendor/click/_bashcomplete.py
+ delete mode 100644 dynaconf/vendor/click/_compat.py
+ delete mode 100644 dynaconf/vendor/click/_termui_impl.py
+ delete mode 100644 dynaconf/vendor/click/_textwrap.py
+ delete mode 100644 dynaconf/vendor/click/_unicodefun.py
+ delete mode 100644 dynaconf/vendor/click/_winconsole.py
+ delete mode 100644 dynaconf/vendor/click/core.py
+ delete mode 100644 dynaconf/vendor/click/decorators.py
+ delete mode 100644 dynaconf/vendor/click/exceptions.py
+ delete mode 100644 dynaconf/vendor/click/formatting.py
+ delete mode 100644 dynaconf/vendor/click/globals.py
+ delete mode 100644 dynaconf/vendor/click/parser.py
+ delete mode 100644 dynaconf/vendor/click/termui.py
+ delete mode 100644 dynaconf/vendor/click/testing.py
+ delete mode 100644 dynaconf/vendor/click/types.py
+ delete mode 100644 dynaconf/vendor/click/utils.py
+ delete mode 100644 dynaconf/vendor/dotenv/README.md
+ delete mode 100644 dynaconf/vendor/dotenv/__init__.py
+ delete mode 100644 dynaconf/vendor/dotenv/cli.py
+ delete mode 100644 dynaconf/vendor/dotenv/compat.py
+ delete mode 100644 dynaconf/vendor/dotenv/ipython.py
+ delete mode 100644 dynaconf/vendor/dotenv/main.py
+ delete mode 100644 dynaconf/vendor/dotenv/parser.py
+ delete mode 100644 dynaconf/vendor/dotenv/py.typed
+ delete mode 100644 dynaconf/vendor/dotenv/version.py
+ delete mode 100644 dynaconf/vendor/ruamel/__init__.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/CHANGES
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/LICENSE
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/MANIFEST.in
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/PKG-INFO
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/README.rst
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/__init__.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/anchor.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/comments.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/compat.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/composer.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/configobjwalker.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/constructor.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/cyaml.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/dumper.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/emitter.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/error.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/events.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/loader.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/main.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/nodes.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/parser.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/py.typed
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/reader.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/representer.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/resolver.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/scalarbool.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/scalarfloat.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/scalarint.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/scalarstring.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/scanner.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/serializer.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/setup.cfg
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/setup.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/timestamp.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/tokens.py
+ delete mode 100644 dynaconf/vendor/ruamel/yaml/util.py
+ delete mode 100644 dynaconf/vendor/toml/README.md
+ delete mode 100644 dynaconf/vendor/toml/__init__.py
+ delete mode 100644 dynaconf/vendor/toml/decoder.py
+ delete mode 100644 dynaconf/vendor/toml/encoder.py
+ delete mode 100644 dynaconf/vendor/toml/ordered.py
+ delete mode 100644 dynaconf/vendor/toml/tz.py
+ delete mode 100644 dynaconf/vendor_src/click/README.md
+ delete mode 100644 dynaconf/vendor_src/click/__init__.py
+ delete mode 100644 dynaconf/vendor_src/click/_bashcomplete.py
+ delete mode 100644 dynaconf/vendor_src/click/_compat.py
+ delete mode 100644 dynaconf/vendor_src/click/_termui_impl.py
+ delete mode 100644 dynaconf/vendor_src/click/_textwrap.py
+ delete mode 100644 dynaconf/vendor_src/click/_unicodefun.py
+ delete mode 100644 dynaconf/vendor_src/click/_winconsole.py
+ delete mode 100644 dynaconf/vendor_src/click/core.py
+ delete mode 100644 dynaconf/vendor_src/click/decorators.py
+ delete mode 100644 dynaconf/vendor_src/click/exceptions.py
+ delete mode 100644 dynaconf/vendor_src/click/formatting.py
+ delete mode 100644 dynaconf/vendor_src/click/globals.py
+ delete mode 100644 dynaconf/vendor_src/click/parser.py
+ delete mode 100644 dynaconf/vendor_src/click/termui.py
+ delete mode 100644 dynaconf/vendor_src/click/testing.py
+ delete mode 100644 dynaconf/vendor_src/click/types.py
+ delete mode 100644 dynaconf/vendor_src/click/utils.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/README.md
+ delete mode 100644 dynaconf/vendor_src/dotenv/__init__.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/cli.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/compat.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/ipython.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/main.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/parser.py
+ delete mode 100644 dynaconf/vendor_src/dotenv/py.typed
+ delete mode 100644 dynaconf/vendor_src/dotenv/version.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/__init__.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/CHANGES
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/LICENSE
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/MANIFEST.in
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/PKG-INFO
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/README.rst
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/__init__.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/anchor.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/comments.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/compat.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/composer.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/configobjwalker.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/constructor.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/cyaml.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/dumper.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/emitter.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/error.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/events.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/loader.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/main.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/nodes.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/parser.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/py.typed
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/reader.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/representer.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/resolver.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/scalarbool.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/scalarfloat.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/scalarint.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/scalarstring.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/scanner.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/serializer.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/setup.cfg
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/setup.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/timestamp.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/tokens.py
+ delete mode 100644 dynaconf/vendor_src/ruamel/yaml/util.py
+ delete mode 100644 dynaconf/vendor_src/toml/README.md
+ delete mode 100644 dynaconf/vendor_src/toml/__init__.py
+ delete mode 100644 dynaconf/vendor_src/toml/decoder.py
+ delete mode 100644 dynaconf/vendor_src/toml/encoder.py
+ delete mode 100644 dynaconf/vendor_src/toml/ordered.py
+ delete mode 100644 dynaconf/vendor_src/toml/tz.py
+
+diff --git a/dynaconf/cli.py b/dynaconf/cli.py
+index 2d45e52..7df767a 100644
+--- a/dynaconf/cli.py
++++ b/dynaconf/cli.py
+@@ -20,8 +20,8 @@ from dynaconf.utils.functional import empty
+ from dynaconf.utils.parse_conf import parse_conf_data
+ from dynaconf.validator import ValidationError
+ from dynaconf.validator import Validator
+-from dynaconf.vendor import click
+-from dynaconf.vendor import toml
++import click
++import toml
+ 
+ 
+ CWD = Path.cwd()
+diff --git a/dynaconf/default_settings.py b/dynaconf/default_settings.py
+index 66601b0..9605fc5 100644
+--- a/dynaconf/default_settings.py
++++ b/dynaconf/default_settings.py
+@@ -8,7 +8,7 @@ from dynaconf.utils import upperfy
+ from dynaconf.utils import warn_deprecations
+ from dynaconf.utils.files import find_file
+ from dynaconf.utils.parse_conf import parse_conf_data
+-from dynaconf.vendor.dotenv import load_dotenv
++from dotenv import load_dotenv
+ 
+ 
+ def try_renamed(key, value, older_key, current_key):
+diff --git a/dynaconf/loaders/env_loader.py b/dynaconf/loaders/env_loader.py
+index e7b13bd..b034c8a 100644
+--- a/dynaconf/loaders/env_loader.py
++++ b/dynaconf/loaders/env_loader.py
+@@ -2,7 +2,7 @@ from os import environ
+ 
+ from dynaconf.utils import upperfy
+ from dynaconf.utils.parse_conf import parse_conf_data
+-from dynaconf.vendor.dotenv import cli as dotenv_cli
++from dotenv import cli as dotenv_cli
+ 
+ 
+ IDENTIFIER = "env"
+diff --git a/dynaconf/loaders/toml_loader.py b/dynaconf/loaders/toml_loader.py
+index 07b973f..d81d675 100644
+--- a/dynaconf/loaders/toml_loader.py
++++ b/dynaconf/loaders/toml_loader.py
+@@ -5,7 +5,7 @@ from dynaconf import default_settings
+ from dynaconf.constants import TOML_EXTENSIONS
+ from dynaconf.loaders.base import BaseLoader
+ from dynaconf.utils import object_merge
+-from dynaconf.vendor import toml
++import toml
+ 
+ 
+ def load(obj, env=None, silent=True, key=None, filename=None):
+diff --git a/dynaconf/loaders/yaml_loader.py b/dynaconf/loaders/yaml_loader.py
+index 33c6532..3ef419a 100644
+--- a/dynaconf/loaders/yaml_loader.py
++++ b/dynaconf/loaders/yaml_loader.py
+@@ -7,7 +7,7 @@ from dynaconf.constants import YAML_EXTENSIONS
+ from dynaconf.loaders.base import BaseLoader
+ from dynaconf.utils import object_merge
+ from dynaconf.utils.parse_conf import try_to_encode
+-from dynaconf.vendor.ruamel import yaml
++from ruamel import yaml
+ 
+ # Add support for Dynaconf Lazy values to YAML dumper
+ yaml.SafeDumper.yaml_representers[
+diff --git a/dynaconf/utils/parse_conf.py b/dynaconf/utils/parse_conf.py
+index 5fc8234..6509c35 100644
+--- a/dynaconf/utils/parse_conf.py
++++ b/dynaconf/utils/parse_conf.py
+@@ -8,7 +8,7 @@ from dynaconf.utils import extract_json_objects
+ from dynaconf.utils import multi_replace
+ from dynaconf.utils import recursively_evaluate_lazy_format
+ from dynaconf.utils.boxing import DynaBox
+-from dynaconf.vendor import toml
++import toml
+ 
+ try:
+     from jinja2 import Environment
+diff --git a/dynaconf/vendor/box/converters.py b/dynaconf/vendor/box/converters.py
+index 93cdcfb..c81877a 100644
+--- a/dynaconf/vendor/box/converters.py
++++ b/dynaconf/vendor/box/converters.py
+@@ -7,9 +7,9 @@ _B='utf-8'
+ _A=None
+ import csv,json,sys,warnings
+ from pathlib import Path
+-import dynaconf.vendor.ruamel.yaml as yaml
++import ruamel.yaml as yaml
+ from dynaconf.vendor.box.exceptions import BoxError,BoxWarning
+-from dynaconf.vendor import toml
++import toml
+ BOX_PARAMETERS='default_box','default_box_attr','conversion_box','frozen_box','camel_killer_box','box_safe_prefix','box_duplicates','ordered_box','default_box_none_transform','box_dots','modify_tuples_box','box_intact_types','box_recast'
+ def _exists(filename,create=_E):
+ 	A=filename;B=Path(A)
+@@ -75,4 +75,4 @@ def _to_csv(box_list,filename,encoding=_B,errors=_C):
+ 			for G in A:D.writerow(G)
+ def _from_csv(filename,encoding=_B,errors=_C):
+ 	A=filename;_exists(A)
+-	with open(A,_G,encoding=encoding,errors=errors,newline='')as B:C=csv.DictReader(B);return[A for A in C]
+\ No newline at end of file
++	with open(A,_G,encoding=encoding,errors=errors,newline='')as B:C=csv.DictReader(B);return[A for A in C]
+diff --git a/dynaconf/vendor/box/from_file.py b/dynaconf/vendor/box/from_file.py
+index daa1137..4a2739d 100644
+--- a/dynaconf/vendor/box/from_file.py
++++ b/dynaconf/vendor/box/from_file.py
+@@ -1,8 +1,8 @@
+ from json import JSONDecodeError
+ from pathlib import Path
+ from typing import Union
+-from dynaconf.vendor.toml import TomlDecodeError
+-from dynaconf.vendor.ruamel.yaml import YAMLError
++from toml import TomlDecodeError
++from ruamel.yaml import YAMLError
+ from .exceptions import BoxError
+ from .box import Box
+ from .box_list import BoxList
+@@ -31,4 +31,4 @@ def box_from_file(file,file_type=None,encoding='utf-8',errors='strict'):
+ 	if A.suffix in('.json','.jsn'):return _to_json(B)
+ 	if A.suffix in('.yaml','.yml'):return _to_yaml(B)
+ 	if A.suffix in('.tml','.toml'):return _to_toml(B)
+-	raise BoxError(f"Could not determine file type based off extension, please provide file_type")
+\ No newline at end of file
++	raise BoxError(f"Could not determine file type based off extension, please provide file_type")
+diff --git a/dynaconf/vendor/click/README.md b/dynaconf/vendor/click/README.md
+deleted file mode 100644
+index 0f7bac3..0000000
+--- a/dynaconf/vendor/click/README.md
++++ /dev/null
+@@ -1,5 +0,0 @@
+-## python-click
+-
+-Vendored dep taken from: https://github.com/pallets/click
+-Licensed under MIT: https://github.com/pallets/clickl/blob/master/LICENSE
+-Current version: 7.1.x
+diff --git a/dynaconf/vendor/click/__init__.py b/dynaconf/vendor/click/__init__.py
+deleted file mode 100644
+index fc6520a..0000000
+--- a/dynaconf/vendor/click/__init__.py
++++ /dev/null
+@@ -1,60 +0,0 @@
+-from .core import Argument,BaseCommand,Command,CommandCollection,Context,Group,MultiCommand,Option,Parameter
+-from .decorators import argument
+-from .decorators import command
+-from .decorators import confirmation_option
+-from .decorators import group
+-from .decorators import help_option
+-from .decorators import make_pass_decorator
+-from .decorators import option
+-from .decorators import pass_context
+-from .decorators import pass_obj
+-from .decorators import password_option
+-from .decorators import version_option
+-from .exceptions import Abort
+-from .exceptions import BadArgumentUsage
+-from .exceptions import BadOptionUsage
+-from .exceptions import BadParameter
+-from .exceptions import ClickException
+-from .exceptions import FileError
+-from .exceptions import MissingParameter
+-from .exceptions import NoSuchOption
+-from .exceptions import UsageError
+-from .formatting import HelpFormatter
+-from .formatting import wrap_text
+-from .globals import get_current_context
+-from .parser import OptionParser
+-from .termui import clear
+-from .termui import confirm
+-from .termui import echo_via_pager
+-from .termui import edit
+-from .termui import get_terminal_size
+-from .termui import getchar
+-from .termui import launch
+-from .termui import pause
+-from .termui import progressbar
+-from .termui import prompt
+-from .termui import secho
+-from .termui import style
+-from .termui import unstyle
+-from .types import BOOL
+-from .types import Choice
+-from .types import DateTime
+-from .types import File
+-from .types import FLOAT
+-from .types import FloatRange
+-from .types import INT
+-from .types import IntRange
+-from .types import ParamType
+-from .types import Path
+-from .types import STRING
+-from .types import Tuple
+-from .types import UNPROCESSED
+-from .types import UUID
+-from .utils import echo
+-from .utils import format_filename
+-from .utils import get_app_dir
+-from .utils import get_binary_stream
+-from .utils import get_os_args
+-from .utils import get_text_stream
+-from .utils import open_file
+-__version__='8.0.0.dev'
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/_bashcomplete.py b/dynaconf/vendor/click/_bashcomplete.py
+deleted file mode 100644
+index e27049d..0000000
+--- a/dynaconf/vendor/click/_bashcomplete.py
++++ /dev/null
+@@ -1,114 +0,0 @@
+-_I='COMP_CWORD'
+-_H='COMP_WORDS'
+-_G='fish'
+-_F='zsh'
+-_E='bash'
+-_D='_'
+-_C=False
+-_B=None
+-_A=True
+-import copy,os,re
+-from collections import abc
+-from .core import Argument
+-from .core import MultiCommand
+-from .core import Option
+-from .parser import split_arg_string
+-from .types import Choice
+-from .utils import echo
+-WORDBREAK='='
+-COMPLETION_SCRIPT_BASH='\n%(complete_func)s() {\n    local IFS=$\'\n\'\n    COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\\n                   COMP_CWORD=$COMP_CWORD \\\n                   %(autocomplete_var)s=complete $1 ) )\n    return 0\n}\n\n%(complete_func)setup() {\n    local COMPLETION_OPTIONS=""\n    local BASH_VERSION_ARR=(${BASH_VERSION//./ })\n    # Only BASH version 4.4 and later have the nosort option.\n    if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then\n        COMPLETION_OPTIONS="-o nosort"\n    fi\n\n    complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s\n}\n\n%(complete_func)setup\n'
+-COMPLETION_SCRIPT_ZSH='\n#compdef %(script_names)s\n\n%(complete_func)s() {\n    local -a completions\n    local -a completions_with_descriptions\n    local -a response\n    (( ! $+commands[%(script_names)s] )) && return 1\n\n    response=("${(@f)$( env COMP_WORDS="${words[*]}" \\\n                        COMP_CWORD=$((CURRENT-1)) \\\n                        %(autocomplete_var)s="complete_zsh" \\\n                        %(script_names)s )}")\n\n    for key descr in ${(kv)response}; do\n      if [[ "$descr" == "_" ]]; then\n          completions+=("$key")\n      else\n          completions_with_descriptions+=("$key":"$descr")\n      fi\n    done\n\n    if [ -n "$completions_with_descriptions" ]; then\n        _describe -V unsorted completions_with_descriptions -U\n    fi\n\n    if [ -n "$completions" ]; then\n        compadd -U -V unsorted -a completions\n    fi\n    compstate[insert]="automenu"\n}\n\ncompdef %(complete_func)s %(script_names)s\n'
+-COMPLETION_SCRIPT_FISH='complete --no-files --command %(script_names)s --arguments "(env %(autocomplete_var)s=complete_fish COMP_WORDS=(commandline -cp) COMP_CWORD=(commandline -t) %(script_names)s)"'
+-_completion_scripts={_E:COMPLETION_SCRIPT_BASH,_F:COMPLETION_SCRIPT_ZSH,_G:COMPLETION_SCRIPT_FISH}
+-_invalid_ident_char_re=re.compile('[^a-zA-Z0-9_]')
+-def get_completion_script(prog_name,complete_var,shell):A=prog_name;B=_invalid_ident_char_re.sub('',A.replace('-',_D));C=_completion_scripts.get(shell,COMPLETION_SCRIPT_BASH);return (C%{'complete_func':f"_{B}_completion",'script_names':A,'autocomplete_var':complete_var}).strip()+';'
+-def resolve_ctx(cli,prog_name,args):
+-	B=args;A=cli.make_context(prog_name,B,resilient_parsing=_A);B=A.protected_args+A.args
+-	while B:
+-		if isinstance(A.command,MultiCommand):
+-			if not A.command.chain:
+-				E,C,B=A.command.resolve_command(A,B)
+-				if C is _B:return A
+-				A=C.make_context(E,B,parent=A,resilient_parsing=_A);B=A.protected_args+A.args
+-			else:
+-				while B:
+-					E,C,B=A.command.resolve_command(A,B)
+-					if C is _B:return A
+-					D=C.make_context(E,B,parent=A,allow_extra_args=_A,allow_interspersed_args=_C,resilient_parsing=_A);B=D.args
+-				A=D;B=D.protected_args+D.args
+-		else:break
+-	return A
+-def start_of_option(param_str):A=param_str;return A and A[:1]=='-'
+-def is_incomplete_option(all_args,cmd_param):
+-	A=cmd_param
+-	if not isinstance(A,Option):return _C
+-	if A.is_flag:return _C
+-	B=_B
+-	for (D,C) in enumerate(reversed([A for A in all_args if A!=WORDBREAK])):
+-		if D+1>A.nargs:break
+-		if start_of_option(C):B=C
+-	return _A if B and B in A.opts else _C
+-def is_incomplete_argument(current_params,cmd_param):
+-	A=cmd_param
+-	if not isinstance(A,Argument):return _C
+-	B=current_params[A.name]
+-	if B is _B:return _A
+-	if A.nargs==-1:return _A
+-	if isinstance(B,abc.Iterable)and A.nargs>1 and len(B)<A.nargs:return _A
+-	return _C
+-def get_user_autocompletions(ctx,args,incomplete,cmd_param):
+-	C=incomplete;A=cmd_param;B=[]
+-	if isinstance(A.type,Choice):B=[(B,_B)for B in A.type.choices if str(B).startswith(C)]
+-	elif A.autocompletion is not _B:D=A.autocompletion(ctx=ctx,args=args,incomplete=C);B=[A if isinstance(A,tuple)else(A,_B)for A in D]
+-	return B
+-def get_visible_commands_starting_with(ctx,starts_with):
+-	A=ctx
+-	for B in A.command.list_commands(A):
+-		if B.startswith(starts_with):
+-			C=A.command.get_command(A,B)
+-			if not C.hidden:yield C
+-def add_subcommand_completions(ctx,incomplete,completions_out):
+-	C=completions_out;B=incomplete;A=ctx
+-	if isinstance(A.command,MultiCommand):C.extend([(C.name,C.get_short_help_str())for C in get_visible_commands_starting_with(A,B)])
+-	while A.parent is not _B:
+-		A=A.parent
+-		if isinstance(A.command,MultiCommand)and A.command.chain:D=[C for C in get_visible_commands_starting_with(A,B)if C.name not in A.protected_args];C.extend([(A.name,A.get_short_help_str())for A in D])
+-def get_choices(cli,prog_name,args,incomplete):
+-	B=incomplete;D=copy.deepcopy(args);C=resolve_ctx(cli,prog_name,args)
+-	if C is _B:return[]
+-	G='--'in D
+-	if start_of_option(B)and WORDBREAK in B:F=B.partition(WORDBREAK);D.append(F[0]);B=F[2]
+-	elif B==WORDBREAK:B=''
+-	E=[]
+-	if not G and start_of_option(B):
+-		for A in C.command.params:
+-			if isinstance(A,Option)and not A.hidden:H=[B for B in A.opts+A.secondary_opts if B not in D or A.multiple];E.extend([(C,A.help)for C in H if C.startswith(B)])
+-		return E
+-	for A in C.command.params:
+-		if is_incomplete_option(D,A):return get_user_autocompletions(C,D,B,A)
+-	for A in C.command.params:
+-		if is_incomplete_argument(C.params,A):return get_user_autocompletions(C,D,B,A)
+-	add_subcommand_completions(C,B,E);return sorted(E)
+-def do_complete(cli,prog_name,include_descriptions):
+-	B=split_arg_string(os.environ[_H]);C=int(os.environ[_I]);E=B[1:C]
+-	try:D=B[C]
+-	except IndexError:D=''
+-	for A in get_choices(cli,prog_name,E,D):
+-		echo(A[0])
+-		if include_descriptions:echo(A[1]if A[1]else _D)
+-	return _A
+-def do_complete_fish(cli,prog_name):
+-	B=split_arg_string(os.environ[_H]);C=os.environ[_I];D=B[1:]
+-	for A in get_choices(cli,prog_name,D,C):
+-		if A[1]:echo(f"{A[0]}\t{A[1]}")
+-		else:echo(A[0])
+-	return _A
+-def bashcomplete(cli,prog_name,complete_var,complete_instr):
+-	C=complete_instr;B=prog_name
+-	if _D in C:D,A=C.split(_D,1)
+-	else:D=C;A=_E
+-	if D=='source':echo(get_completion_script(B,complete_var,A));return _A
+-	elif D=='complete':
+-		if A==_G:return do_complete_fish(cli,B)
+-		elif A in{_E,_F}:return do_complete(cli,B,A==_F)
+-	return _C
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/_compat.py b/dynaconf/vendor/click/_compat.py
+deleted file mode 100644
+index f1eb2b4..0000000
+--- a/dynaconf/vendor/click/_compat.py
++++ /dev/null
+@@ -1,240 +0,0 @@
+-_L='stderr'
+-_K='stdout'
+-_J='stdin'
+-_I='buffer'
+-_H='ascii'
+-_G='win'
+-_F='utf-8'
+-_E='encoding'
+-_D='replace'
+-_C=True
+-_B=False
+-_A=None
+-import codecs,io,os,re,sys
+-from weakref import WeakKeyDictionary
+-CYGWIN=sys.platform.startswith('cygwin')
+-MSYS2=sys.platform.startswith(_G)and'GCC'in sys.version
+-APP_ENGINE='APPENGINE_RUNTIME'in os.environ and'Development/'in os.environ.get('SERVER_SOFTWARE','')
+-WIN=sys.platform.startswith(_G)and not APP_ENGINE and not MSYS2
+-DEFAULT_COLUMNS=80
+-auto_wrap_for_ansi=_A
+-colorama=_A
+-get_winterm_size=_A
+-_ansi_re=re.compile('\\033\\[[;?0-9]*[a-zA-Z]')
+-def get_filesystem_encoding():return sys.getfilesystemencoding()or sys.getdefaultencoding()
+-def _make_text_stream(stream,encoding,errors,force_readable=_B,force_writable=_B):
+-	C=stream;B=errors;A=encoding
+-	if A is _A:A=get_best_encoding(C)
+-	if B is _A:B=_D
+-	return _NonClosingTextIOWrapper(C,A,B,line_buffering=_C,force_readable=force_readable,force_writable=force_writable)
+-def is_ascii_encoding(encoding):
+-	try:return codecs.lookup(encoding).name==_H
+-	except LookupError:return _B
+-def get_best_encoding(stream):
+-	A=getattr(stream,_E,_A)or sys.getdefaultencoding()
+-	if is_ascii_encoding(A):return _F
+-	return A
+-class _NonClosingTextIOWrapper(io.TextIOWrapper):
+-	def __init__(B,stream,encoding,errors,force_readable=_B,force_writable=_B,**C):A=stream;B._stream=A=_FixupStream(A,force_readable,force_writable);super().__init__(A,encoding,errors,**C)
+-	def __del__(A):
+-		try:A.detach()
+-		except Exception:pass
+-	def isatty(A):return A._stream.isatty()
+-class _FixupStream:
+-	def __init__(A,stream,force_readable=_B,force_writable=_B):A._stream=stream;A._force_readable=force_readable;A._force_writable=force_writable
+-	def __getattr__(A,name):return getattr(A._stream,name)
+-	def read1(A,size):
+-		B=getattr(A._stream,'read1',_A)
+-		if B is not _A:return B(size)
+-		return A._stream.read(size)
+-	def readable(A):
+-		if A._force_readable:return _C
+-		B=getattr(A._stream,'readable',_A)
+-		if B is not _A:return B()
+-		try:A._stream.read(0)
+-		except Exception:return _B
+-		return _C
+-	def writable(A):
+-		if A._force_writable:return _C
+-		B=getattr(A._stream,'writable',_A)
+-		if B is not _A:return B()
+-		try:A._stream.write('')
+-		except Exception:
+-			try:A._stream.write(b'')
+-			except Exception:return _B
+-		return _C
+-	def seekable(A):
+-		B=getattr(A._stream,'seekable',_A)
+-		if B is not _A:return B()
+-		try:A._stream.seek(A._stream.tell())
+-		except Exception:return _B
+-		return _C
+-def is_bytes(x):return isinstance(x,(bytes,memoryview,bytearray))
+-def _is_binary_reader(stream,default=_B):
+-	try:return isinstance(stream.read(0),bytes)
+-	except Exception:return default
+-def _is_binary_writer(stream,default=_B):
+-	A=stream
+-	try:A.write(b'')
+-	except Exception:
+-		try:A.write('');return _B
+-		except Exception:pass
+-		return default
+-	return _C
+-def _find_binary_reader(stream):
+-	A=stream
+-	if _is_binary_reader(A,_B):return A
+-	B=getattr(A,_I,_A)
+-	if B is not _A and _is_binary_reader(B,_C):return B
+-def _find_binary_writer(stream):
+-	A=stream
+-	if _is_binary_writer(A,_B):return A
+-	B=getattr(A,_I,_A)
+-	if B is not _A and _is_binary_writer(B,_C):return B
+-def _stream_is_misconfigured(stream):return is_ascii_encoding(getattr(stream,_E,_A)or _H)
+-def _is_compat_stream_attr(stream,attr,value):A=value;B=getattr(stream,attr,_A);return B==A or A is _A and B is not _A
+-def _is_compatible_text_stream(stream,encoding,errors):A=stream;return _is_compat_stream_attr(A,_E,encoding)and _is_compat_stream_attr(A,'errors',errors)
+-def _force_correct_text_stream(text_stream,encoding,errors,is_binary,find_binary,force_readable=_B,force_writable=_B):
+-	C=encoding;B=errors;A=text_stream
+-	if is_binary(A,_B):D=A
+-	else:
+-		if _is_compatible_text_stream(A,C,B)and not(C is _A and _stream_is_misconfigured(A)):return A
+-		D=find_binary(A)
+-		if D is _A:return A
+-	if B is _A:B=_D
+-	return _make_text_stream(D,C,B,force_readable=force_readable,force_writable=force_writable)
+-def _force_correct_text_reader(text_reader,encoding,errors,force_readable=_B):return _force_correct_text_stream(text_reader,encoding,errors,_is_binary_reader,_find_binary_reader,force_readable=force_readable)
+-def _force_correct_text_writer(text_writer,encoding,errors,force_writable=_B):return _force_correct_text_stream(text_writer,encoding,errors,_is_binary_writer,_find_binary_writer,force_writable=force_writable)
+-def get_binary_stdin():
+-	A=_find_binary_reader(sys.stdin)
+-	if A is _A:raise RuntimeError('Was not able to determine binary stream for sys.stdin.')
+-	return A
+-def get_binary_stdout():
+-	A=_find_binary_writer(sys.stdout)
+-	if A is _A:raise RuntimeError('Was not able to determine binary stream for sys.stdout.')
+-	return A
+-def get_binary_stderr():
+-	A=_find_binary_writer(sys.stderr)
+-	if A is _A:raise RuntimeError('Was not able to determine binary stream for sys.stderr.')
+-	return A
+-def get_text_stdin(encoding=_A,errors=_A):
+-	B=errors;A=encoding;C=_get_windows_console_stream(sys.stdin,A,B)
+-	if C is not _A:return C
+-	return _force_correct_text_reader(sys.stdin,A,B,force_readable=_C)
+-def get_text_stdout(encoding=_A,errors=_A):
+-	B=errors;A=encoding;C=_get_windows_console_stream(sys.stdout,A,B)
+-	if C is not _A:return C
+-	return _force_correct_text_writer(sys.stdout,A,B,force_writable=_C)
+-def get_text_stderr(encoding=_A,errors=_A):
+-	B=errors;A=encoding;C=_get_windows_console_stream(sys.stderr,A,B)
+-	if C is not _A:return C
+-	return _force_correct_text_writer(sys.stderr,A,B,force_writable=_C)
+-def filename_to_ui(value):
+-	A=value
+-	if isinstance(A,bytes):A=A.decode(get_filesystem_encoding(),_D)
+-	else:A=A.encode(_F,'surrogateescape').decode(_F,_D)
+-	return A
+-def get_strerror(e,default=_A):
+-	B=default
+-	if hasattr(e,'strerror'):A=e.strerror
+-	elif B is not _A:A=B
+-	else:A=str(e)
+-	if isinstance(A,bytes):A=A.decode(_F,_D)
+-	return A
+-def _wrap_io_open(file,mode,encoding,errors):
+-	A=mode
+-	if'b'in A:return open(file,A)
+-	return open(file,A,encoding=encoding,errors=errors)
+-def open_stream(filename,mode='r',encoding=_A,errors='strict',atomic=_B):
+-	P='x';O='a';N='w';E=errors;D=encoding;B=filename;A=mode;G='b'in A
+-	if B=='-':
+-		if any((B in A for B in[N,O,P])):
+-			if G:return get_binary_stdout(),_B
+-			return get_text_stdout(encoding=D,errors=E),_B
+-		if G:return get_binary_stdin(),_B
+-		return get_text_stdin(encoding=D,errors=E),_B
+-	if not atomic:return _wrap_io_open(B,A,D,E),_C
+-	if O in A:raise ValueError("Appending to an existing file is not supported, because that would involve an expensive `copy`-operation to a temporary file. Open the file in normal `w`-mode and copy explicitly if that's what you're after.")
+-	if P in A:raise ValueError('Use the `overwrite`-parameter instead.')
+-	if N not in A:raise ValueError('Atomic writes only make sense with `w`-mode.')
+-	import errno as I,random as K
+-	try:C=os.stat(B).st_mode
+-	except OSError:C=_A
+-	J=os.O_RDWR|os.O_CREAT|os.O_EXCL
+-	if G:J|=getattr(os,'O_BINARY',0)
+-	while _C:
+-		H=os.path.join(os.path.dirname(B),f".__atomic-write{K.randrange(1<<32):08x}")
+-		try:L=os.open(H,J,438 if C is _A else C);break
+-		except OSError as F:
+-			if F.errno==I.EEXIST or os.name=='nt'and F.errno==I.EACCES and os.path.isdir(F.filename)and os.access(F.filename,os.W_OK):continue
+-			raise
+-	if C is not _A:os.chmod(H,C)
+-	M=_wrap_io_open(L,A,D,E);return _AtomicFile(M,H,os.path.realpath(B)),_C
+-class _AtomicFile:
+-	def __init__(A,f,tmp_filename,real_filename):A._f=f;A._tmp_filename=tmp_filename;A._real_filename=real_filename;A.closed=_B
+-	@property
+-	def name(self):return self._real_filename
+-	def close(A,delete=_B):
+-		if A.closed:return
+-		A._f.close();os.replace(A._tmp_filename,A._real_filename);A.closed=_C
+-	def __getattr__(A,name):return getattr(A._f,name)
+-	def __enter__(A):return A
+-	def __exit__(A,exc_type,exc_value,tb):A.close(delete=exc_type is not _A)
+-	def __repr__(A):return repr(A._f)
+-def strip_ansi(value):return _ansi_re.sub('',value)
+-def _is_jupyter_kernel_output(stream):
+-	A=stream
+-	if WIN:return
+-	while isinstance(A,(_FixupStream,_NonClosingTextIOWrapper)):A=A._stream
+-	return A.__class__.__module__.startswith('ipykernel.')
+-def should_strip_ansi(stream=_A,color=_A):
+-	B=color;A=stream
+-	if B is _A:
+-		if A is _A:A=sys.stdin
+-		return not isatty(A)and not _is_jupyter_kernel_output(A)
+-	return not B
+-if WIN:
+-	DEFAULT_COLUMNS=79;from ._winconsole import _get_windows_console_stream
+-	def _get_argv_encoding():import locale as A;return A.getpreferredencoding()
+-	try:import colorama
+-	except ImportError:pass
+-	else:
+-		_ansi_stream_wrappers=WeakKeyDictionary()
+-		def auto_wrap_for_ansi(stream,color=_A):
+-			A=stream
+-			try:C=_ansi_stream_wrappers.get(A)
+-			except Exception:C=_A
+-			if C is not _A:return C
+-			E=should_strip_ansi(A,color);D=colorama.AnsiToWin32(A,strip=E);B=D.stream;F=B.write
+-			def G(s):
+-				try:return F(s)
+-				except BaseException:D.reset_all();raise
+-			B.write=G
+-			try:_ansi_stream_wrappers[A]=B
+-			except Exception:pass
+-			return B
+-		def get_winterm_size():A=colorama.win32.GetConsoleScreenBufferInfo(colorama.win32.STDOUT).srWindow;return A.Right-A.Left,A.Bottom-A.Top
+-else:
+-	def _get_argv_encoding():return getattr(sys.stdin,_E,_A)or get_filesystem_encoding()
+-	def _get_windows_console_stream(f,encoding,errors):return _A
+-def term_len(x):return len(strip_ansi(x))
+-def isatty(stream):
+-	try:return stream.isatty()
+-	except Exception:return _B
+-def _make_cached_stream_func(src_func,wrapper_func):
+-	C=src_func;D=WeakKeyDictionary()
+-	def A():
+-		B=C()
+-		try:A=D.get(B)
+-		except Exception:A=_A
+-		if A is not _A:return A
+-		A=wrapper_func()
+-		try:B=C();D[B]=A
+-		except Exception:pass
+-		return A
+-	return A
+-_default_text_stdin=_make_cached_stream_func(lambda:sys.stdin,get_text_stdin)
+-_default_text_stdout=_make_cached_stream_func(lambda:sys.stdout,get_text_stdout)
+-_default_text_stderr=_make_cached_stream_func(lambda:sys.stderr,get_text_stderr)
+-binary_streams={_J:get_binary_stdin,_K:get_binary_stdout,_L:get_binary_stderr}
+-text_streams={_J:get_text_stdin,_K:get_text_stdout,_L:get_text_stderr}
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/_termui_impl.py b/dynaconf/vendor/click/_termui_impl.py
+deleted file mode 100644
+index b18a9f2..0000000
+--- a/dynaconf/vendor/click/_termui_impl.py
++++ /dev/null
+@@ -1,262 +0,0 @@
+-_H='replace'
+-_G='less'
+-_F='You need to use progress bars in a with block.'
+-_E=' '
+-_D='\n'
+-_C=False
+-_B=True
+-_A=None
+-import contextlib,math,os,sys,time
+-from ._compat import _default_text_stdout,CYGWIN,get_best_encoding,isatty,open_stream,strip_ansi,term_len,WIN
+-from .exceptions import ClickException
+-from .utils import echo
+-if os.name=='nt':BEFORE_BAR='\r';AFTER_BAR=_D
+-else:BEFORE_BAR='\r\x1b[?25l';AFTER_BAR='\x1b[?25h\n'
+-def _length_hint(obj):
+-	B=obj
+-	try:return len(B)
+-	except (AttributeError,TypeError):
+-		try:C=type(B).__length_hint__
+-		except AttributeError:return _A
+-		try:A=C(B)
+-		except TypeError:return _A
+-		if A is NotImplemented or not isinstance(A,int)or A<0:return _A
+-		return A
+-class ProgressBar:
+-	def __init__(A,iterable,length=_A,fill_char='#',empty_char=_E,bar_template='%(bar)s',info_sep='  ',show_eta=_B,show_percent=_A,show_pos=_C,item_show_func=_A,label=_A,file=_A,color=_A,width=30):
+-		E=width;D=file;C=iterable;B=length;A.fill_char=fill_char;A.empty_char=empty_char;A.bar_template=bar_template;A.info_sep=info_sep;A.show_eta=show_eta;A.show_percent=show_percent;A.show_pos=show_pos;A.item_show_func=item_show_func;A.label=label or''
+-		if D is _A:D=_default_text_stdout()
+-		A.file=D;A.color=color;A.width=E;A.autowidth=E==0
+-		if B is _A:B=_length_hint(C)
+-		if C is _A:
+-			if B is _A:raise TypeError('iterable or length is required')
+-			C=range(B)
+-		A.iter=iter(C);A.length=B;A.length_known=B is not _A;A.pos=0;A.avg=[];A.start=A.last_eta=time.time();A.eta_known=_C;A.finished=_C;A.max_width=_A;A.entered=_C;A.current_item=_A;A.is_hidden=not isatty(A.file);A._last_line=_A;A.short_limit=0.5
+-	def __enter__(A):A.entered=_B;A.render_progress();return A
+-	def __exit__(A,exc_type,exc_value,tb):A.render_finish()
+-	def __iter__(A):
+-		if not A.entered:raise RuntimeError(_F)
+-		A.render_progress();return A.generator()
+-	def __next__(A):return next(iter(A))
+-	def is_fast(A):return time.time()-A.start<=A.short_limit
+-	def render_finish(A):
+-		if A.is_hidden or A.is_fast():return
+-		A.file.write(AFTER_BAR);A.file.flush()
+-	@property
+-	def pct(self):
+-		A=self
+-		if A.finished:return 1.0
+-		return min(A.pos/(float(A.length)or 1),1.0)
+-	@property
+-	def time_per_iteration(self):
+-		A=self
+-		if not A.avg:return 0.0
+-		return sum(A.avg)/float(len(A.avg))
+-	@property
+-	def eta(self):
+-		A=self
+-		if A.length_known and not A.finished:return A.time_per_iteration*(A.length-A.pos)
+-		return 0.0
+-	def format_eta(B):
+-		if B.eta_known:
+-			A=int(B.eta);C=A%60;A//=60;D=A%60;A//=60;E=A%24;A//=24
+-			if A>0:return f"{A}d {E:02}:{D:02}:{C:02}"
+-			else:return f"{E:02}:{D:02}:{C:02}"
+-		return''
+-	def format_pos(A):
+-		B=str(A.pos)
+-		if A.length_known:B+=f"/{A.length}"
+-		return B
+-	def format_pct(A):return f"{int(A.pct*100): 4}%"[1:]
+-	def format_bar(A):
+-		if A.length_known:C=int(A.pct*A.width);B=A.fill_char*C;B+=A.empty_char*(A.width-C)
+-		elif A.finished:B=A.fill_char*A.width
+-		else:
+-			B=list(A.empty_char*(A.width or 1))
+-			if A.time_per_iteration!=0:B[int((math.cos(A.pos*A.time_per_iteration)/2.0+0.5)*A.width)]=A.fill_char
+-			B=''.join(B)
+-		return B
+-	def format_progress_line(A):
+-		C=A.show_percent;B=[]
+-		if A.length_known and C is _A:C=not A.show_pos
+-		if A.show_pos:B.append(A.format_pos())
+-		if C:B.append(A.format_pct())
+-		if A.show_eta and A.eta_known and not A.finished:B.append(A.format_eta())
+-		if A.item_show_func is not _A:
+-			D=A.item_show_func(A.current_item)
+-			if D is not _A:B.append(D)
+-		return (A.bar_template%{'label':A.label,'bar':A.format_bar(),'info':A.info_sep.join(B)}).rstrip()
+-	def render_progress(A):
+-		from .termui import get_terminal_size as G
+-		if A.is_hidden:return
+-		B=[]
+-		if A.autowidth:
+-			H=A.width;A.width=0;I=term_len(A.format_progress_line());D=max(0,G()[0]-I)
+-			if D<H:B.append(BEFORE_BAR);B.append(_E*A.max_width);A.max_width=D
+-			A.width=D
+-		F=A.width
+-		if A.max_width is not _A:F=A.max_width
+-		B.append(BEFORE_BAR);C=A.format_progress_line();E=term_len(C)
+-		if A.max_width is _A or A.max_width<E:A.max_width=E
+-		B.append(C);B.append(_E*(F-E));C=''.join(B)
+-		if C!=A._last_line and not A.is_fast():A._last_line=C;echo(C,file=A.file,color=A.color,nl=_C);A.file.flush()
+-	def make_step(A,n_steps):
+-		A.pos+=n_steps
+-		if A.length_known and A.pos>=A.length:A.finished=_B
+-		if time.time()-A.last_eta<1.0:return
+-		A.last_eta=time.time()
+-		if A.pos:B=(time.time()-A.start)/A.pos
+-		else:B=time.time()-A.start
+-		A.avg=A.avg[-6:]+[B];A.eta_known=A.length_known
+-	def update(A,n_steps,current_item=_A):
+-		B=current_item;A.make_step(n_steps)
+-		if B is not _A:A.current_item=B
+-		A.render_progress()
+-	def finish(A):A.eta_known=0;A.current_item=_A;A.finished=_B
+-	def generator(A):
+-		if not A.entered:raise RuntimeError(_F)
+-		if A.is_hidden:yield from A.iter
+-		else:
+-			for B in A.iter:A.current_item=B;yield B;A.update(1)
+-			A.finish();A.render_progress()
+-def pager(generator,color=_A):
+-	H='system';B=color;A=generator;C=_default_text_stdout()
+-	if not isatty(sys.stdin)or not isatty(C):return _nullpager(C,A,B)
+-	D=(os.environ.get('PAGER',_A)or'').strip()
+-	if D:
+-		if WIN:return _tempfilepager(A,D,B)
+-		return _pipepager(A,D,B)
+-	if os.environ.get('TERM')in('dumb','emacs'):return _nullpager(C,A,B)
+-	if WIN or sys.platform.startswith('os2'):return _tempfilepager(A,'more <',B)
+-	if hasattr(os,H)and os.system('(less) 2>/dev/null')==0:return _pipepager(A,_G,B)
+-	import tempfile as F;G,E=F.mkstemp();os.close(G)
+-	try:
+-		if hasattr(os,H)and os.system(f'more "{E}"')==0:return _pipepager(A,'more',B)
+-		return _nullpager(C,A,B)
+-	finally:os.unlink(E)
+-def _pipepager(generator,cmd,color):
+-	I='LESS';A=color;import subprocess as E;F=dict(os.environ);G=cmd.rsplit('/',1)[-1].split()
+-	if A is _A and G[0]==_G:
+-		C=f"{os.environ.get(I,'')}{_E.join(G[1:])}"
+-		if not C:F[I]='-R';A=_B
+-		elif'r'in C or'R'in C:A=_B
+-	B=E.Popen(cmd,shell=_B,stdin=E.PIPE,env=F);H=get_best_encoding(B.stdin)
+-	try:
+-		for D in generator:
+-			if not A:D=strip_ansi(D)
+-			B.stdin.write(D.encode(H,_H))
+-	except (OSError,KeyboardInterrupt):pass
+-	else:B.stdin.close()
+-	while _B:
+-		try:B.wait()
+-		except KeyboardInterrupt:pass
+-		else:break
+-def _tempfilepager(generator,cmd,color):
+-	import tempfile as C;A=C.mktemp();B=''.join(generator)
+-	if not color:B=strip_ansi(B)
+-	D=get_best_encoding(sys.stdout)
+-	with open_stream(A,'wb')[0]as E:E.write(B.encode(D))
+-	try:os.system(f'{cmd} "{A}"')
+-	finally:os.unlink(A)
+-def _nullpager(stream,generator,color):
+-	for A in generator:
+-		if not color:A=strip_ansi(A)
+-		stream.write(A)
+-class Editor:
+-	def __init__(A,editor=_A,env=_A,require_save=_B,extension='.txt'):A.editor=editor;A.env=env;A.require_save=require_save;A.extension=extension
+-	def get_editor(A):
+-		if A.editor is not _A:return A.editor
+-		for D in ('VISUAL','EDITOR'):
+-			B=os.environ.get(D)
+-			if B:return B
+-		if WIN:return'notepad'
+-		for C in ('sensible-editor','vim','nano'):
+-			if os.system(f"which {C} >/dev/null 2>&1")==0:return C
+-		return'vi'
+-	def edit_file(A,filename):
+-		import subprocess as D;B=A.get_editor()
+-		if A.env:C=os.environ.copy();C.update(A.env)
+-		else:C=_A
+-		try:
+-			E=D.Popen(f'{B} "{filename}"',env=C,shell=_B);F=E.wait()
+-			if F!=0:raise ClickException(f"{B}: Editing failed!")
+-		except OSError as G:raise ClickException(f"{B}: Editing failed: {G}")
+-	def edit(D,text):
+-		L='\r\n';K='utf-8-sig';A=text;import tempfile as H;A=A or'';E=type(A)in[bytes,bytearray]
+-		if not E and A and not A.endswith(_D):A+=_D
+-		I,B=H.mkstemp(prefix='editor-',suffix=D.extension)
+-		try:
+-			if not E:
+-				if WIN:F=K;A=A.replace(_D,L)
+-				else:F='utf-8'
+-				A=A.encode(F)
+-			C=os.fdopen(I,'wb');C.write(A);C.close();J=os.path.getmtime(B);D.edit_file(B)
+-			if D.require_save and os.path.getmtime(B)==J:return _A
+-			C=open(B,'rb')
+-			try:G=C.read()
+-			finally:C.close()
+-			if E:return G
+-			else:return G.decode(K).replace(L,_D)
+-		finally:os.unlink(B)
+-def open_url(url,wait=_C,locate=_C):
+-	F='"';D=locate;C=wait;A=url;import subprocess as G
+-	def E(url):
+-		A=url;import urllib as B
+-		if A.startswith('file://'):A=B.unquote(A[7:])
+-		return A
+-	if sys.platform=='darwin':
+-		B=['open']
+-		if C:B.append('-W')
+-		if D:B.append('-R')
+-		B.append(E(A));H=open('/dev/null','w')
+-		try:return G.Popen(B,stderr=H).wait()
+-		finally:H.close()
+-	elif WIN:
+-		if D:A=E(A.replace(F,''));B=f'explorer /select,"{A}"'
+-		else:A=A.replace(F,'');C='/WAIT'if C else'';B=f'start {C} "" "{A}"'
+-		return os.system(B)
+-	elif CYGWIN:
+-		if D:A=os.path.dirname(E(A).replace(F,''));B=f'cygstart "{A}"'
+-		else:A=A.replace(F,'');C='-w'if C else'';B=f'cygstart {C} "{A}"'
+-		return os.system(B)
+-	try:
+-		if D:A=os.path.dirname(E(A))or'.'
+-		else:A=E(A)
+-		I=G.Popen(['xdg-open',A])
+-		if C:return I.wait()
+-		return 0
+-	except OSError:
+-		if A.startswith(('http://','https://'))and not D and not C:import webbrowser as J;J.open(A);return 0
+-		return 1
+-def _translate_ch_to_exc(ch):
+-	if ch=='\x03':raise KeyboardInterrupt()
+-	if ch=='\x04'and not WIN:raise EOFError()
+-	if ch=='\x1a'and WIN:raise EOFError()
+-if WIN:
+-	import msvcrt
+-	@contextlib.contextmanager
+-	def raw_terminal():yield
+-	def getchar(echo):
+-		if echo:B=msvcrt.getwche
+-		else:B=msvcrt.getwch
+-		A=B()
+-		if A in('\x00','à'):A+=B()
+-		_translate_ch_to_exc(A);return A
+-else:
+-	import tty,termios
+-	@contextlib.contextmanager
+-	def raw_terminal():
+-		if not isatty(sys.stdin):B=open('/dev/tty');A=B.fileno()
+-		else:A=sys.stdin.fileno();B=_A
+-		try:
+-			C=termios.tcgetattr(A)
+-			try:tty.setraw(A);yield A
+-			finally:
+-				termios.tcsetattr(A,termios.TCSADRAIN,C);sys.stdout.flush()
+-				if B is not _A:B.close()
+-		except termios.error:pass
+-	def getchar(echo):
+-		with raw_terminal()as B:
+-			A=os.read(B,32);A=A.decode(get_best_encoding(sys.stdin),_H)
+-			if echo and isatty(sys.stdout):sys.stdout.write(A)
+-			_translate_ch_to_exc(A);return A
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/_textwrap.py b/dynaconf/vendor/click/_textwrap.py
+deleted file mode 100644
+index b02fced..0000000
+--- a/dynaconf/vendor/click/_textwrap.py
++++ /dev/null
+@@ -1,19 +0,0 @@
+-import textwrap
+-from contextlib import contextmanager
+-class TextWrapper(textwrap.TextWrapper):
+-	def _handle_long_word(E,reversed_chunks,cur_line,cur_len,width):
+-		B=cur_line;A=reversed_chunks;C=max(width-cur_len,1)
+-		if E.break_long_words:D=A[-1];F=D[:C];G=D[C:];B.append(F);A[-1]=G
+-		elif not B:B.append(A.pop())
+-	@contextmanager
+-	def extra_indent(self,indent):
+-		B=indent;A=self;C=A.initial_indent;D=A.subsequent_indent;A.initial_indent+=B;A.subsequent_indent+=B
+-		try:yield
+-		finally:A.initial_indent=C;A.subsequent_indent=D
+-	def indent_only(A,text):
+-		B=[]
+-		for (D,E) in enumerate(text.splitlines()):
+-			C=A.initial_indent
+-			if D>0:C=A.subsequent_indent
+-			B.append(f"{C}{E}")
+-		return '\n'.join(B)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/_unicodefun.py b/dynaconf/vendor/click/_unicodefun.py
+deleted file mode 100644
+index 792053f..0000000
+--- a/dynaconf/vendor/click/_unicodefun.py
++++ /dev/null
+@@ -1,28 +0,0 @@
+-import codecs,os
+-def _verify_python_env():
+-	M='.utf8';L='.utf-8';J=None;I='ascii'
+-	try:import locale as A;G=codecs.lookup(A.getpreferredencoding()).name
+-	except Exception:G=I
+-	if G!=I:return
+-	B=''
+-	if os.name=='posix':
+-		import subprocess as D
+-		try:C=D.Popen(['locale','-a'],stdout=D.PIPE,stderr=D.PIPE).communicate()[0]
+-		except OSError:C=b''
+-		E=set();H=False
+-		if isinstance(C,bytes):C=C.decode(I,'replace')
+-		for K in C.splitlines():
+-			A=K.strip()
+-			if A.lower().endswith((L,M)):
+-				E.add(A)
+-				if A.lower()in('c.utf8','c.utf-8'):H=True
+-		B+='\n\n'
+-		if not E:B+='Additional information: on this system no suitable UTF-8 locales were discovered. This most likely requires resolving by reconfiguring the locale system.'
+-		elif H:B+='This system supports the C.UTF-8 locale which is recommended. You might be able to resolve your issue by exporting the following environment variables:\n\n    export LC_ALL=C.UTF-8\n    export LANG=C.UTF-8'
+-		else:B+=f"This system lists some UTF-8 supporting locales that you can pick from. The following suitable locales were discovered: {', '.join(sorted(E))}"
+-		F=J
+-		for A in (os.environ.get('LC_ALL'),os.environ.get('LANG')):
+-			if A and A.lower().endswith((L,M)):F=A
+-			if A is not J:break
+-		if F is not J:B+=f"\n\nClick discovered that you exported a UTF-8 locale but the locale system could not pick up from it because it does not exist. The exported locale is {F!r} but it is not supported"
+-	raise RuntimeError(f"Click will abort further execution because Python was configured to use ASCII as encoding for the environment. Consult https://click.palletsprojects.com/unicode-support/ for mitigation steps.{B}")
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/_winconsole.py b/dynaconf/vendor/click/_winconsole.py
+deleted file mode 100644
+index 316b252..0000000
+--- a/dynaconf/vendor/click/_winconsole.py
++++ /dev/null
+@@ -1,108 +0,0 @@
+-_E=False
+-_D='strict'
+-_C='utf-16-le'
+-_B=True
+-_A=None
+-import ctypes,io,time
+-from ctypes import byref,c_char,c_char_p,c_int,c_ssize_t,c_ulong,c_void_p,POINTER,py_object,windll,WINFUNCTYPE
+-from ctypes.wintypes import DWORD
+-from ctypes.wintypes import HANDLE
+-from ctypes.wintypes import LPCWSTR
+-from ctypes.wintypes import LPWSTR
+-import msvcrt
+-from ._compat import _NonClosingTextIOWrapper
+-try:from ctypes import pythonapi
+-except ImportError:pythonapi=_A
+-else:PyObject_GetBuffer=pythonapi.PyObject_GetBuffer;PyBuffer_Release=pythonapi.PyBuffer_Release
+-c_ssize_p=POINTER(c_ssize_t)
+-kernel32=windll.kernel32
+-GetStdHandle=kernel32.GetStdHandle
+-ReadConsoleW=kernel32.ReadConsoleW
+-WriteConsoleW=kernel32.WriteConsoleW
+-GetConsoleMode=kernel32.GetConsoleMode
+-GetLastError=kernel32.GetLastError
+-GetCommandLineW=WINFUNCTYPE(LPWSTR)(('GetCommandLineW',windll.kernel32))
+-CommandLineToArgvW=WINFUNCTYPE(POINTER(LPWSTR),LPCWSTR,POINTER(c_int))(('CommandLineToArgvW',windll.shell32))
+-LocalFree=WINFUNCTYPE(ctypes.c_void_p,ctypes.c_void_p)(('LocalFree',windll.kernel32))
+-STDIN_HANDLE=GetStdHandle(-10)
+-STDOUT_HANDLE=GetStdHandle(-11)
+-STDERR_HANDLE=GetStdHandle(-12)
+-PyBUF_SIMPLE=0
+-PyBUF_WRITABLE=1
+-ERROR_SUCCESS=0
+-ERROR_NOT_ENOUGH_MEMORY=8
+-ERROR_OPERATION_ABORTED=995
+-STDIN_FILENO=0
+-STDOUT_FILENO=1
+-STDERR_FILENO=2
+-EOF=b'\x1a'
+-MAX_BYTES_WRITTEN=32767
+-class Py_buffer(ctypes.Structure):_fields_=[('buf',c_void_p),('obj',py_object),('len',c_ssize_t),('itemsize',c_ssize_t),('readonly',c_int),('ndim',c_int),('format',c_char_p),('shape',c_ssize_p),('strides',c_ssize_p),('suboffsets',c_ssize_p),('internal',c_void_p)]
+-if pythonapi is _A:get_buffer=_A
+-else:
+-	def get_buffer(obj,writable=_E):
+-		A=Py_buffer();B=PyBUF_WRITABLE if writable else PyBUF_SIMPLE;PyObject_GetBuffer(py_object(obj),byref(A),B)
+-		try:C=c_char*A.len;return C.from_address(A.buf)
+-		finally:PyBuffer_Release(byref(A))
+-class _WindowsConsoleRawIOBase(io.RawIOBase):
+-	def __init__(A,handle):A.handle=handle
+-	def isatty(A):io.RawIOBase.isatty(A);return _B
+-class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
+-	def readable(A):return _B
+-	def readinto(D,b):
+-		A=len(b)
+-		if not A:return 0
+-		elif A%2:raise ValueError('cannot read odd number of bytes from UTF-16-LE encoded console')
+-		B=get_buffer(b,writable=_B);E=A//2;C=c_ulong();F=ReadConsoleW(HANDLE(D.handle),B,E,byref(C),_A)
+-		if GetLastError()==ERROR_OPERATION_ABORTED:time.sleep(0.1)
+-		if not F:raise OSError(f"Windows error: {GetLastError()}")
+-		if B[0]==EOF:return 0
+-		return 2*C.value
+-class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
+-	def writable(A):return _B
+-	@staticmethod
+-	def _get_error_message(errno):
+-		A=errno
+-		if A==ERROR_SUCCESS:return'ERROR_SUCCESS'
+-		elif A==ERROR_NOT_ENOUGH_MEMORY:return'ERROR_NOT_ENOUGH_MEMORY'
+-		return f"Windows error {A}"
+-	def write(A,b):
+-		B=len(b);E=get_buffer(b);F=min(B,MAX_BYTES_WRITTEN)//2;C=c_ulong();WriteConsoleW(HANDLE(A.handle),E,F,byref(C),_A);D=2*C.value
+-		if D==0 and B>0:raise OSError(A._get_error_message(GetLastError()))
+-		return D
+-class ConsoleStream:
+-	def __init__(A,text_stream,byte_stream):A._text_stream=text_stream;A.buffer=byte_stream
+-	@property
+-	def name(self):return self.buffer.name
+-	def write(A,x):
+-		if isinstance(x,str):return A._text_stream.write(x)
+-		try:A.flush()
+-		except Exception:pass
+-		return A.buffer.write(x)
+-	def writelines(A,lines):
+-		for B in lines:A.write(B)
+-	def __getattr__(A,name):return getattr(A._text_stream,name)
+-	def isatty(A):return A.buffer.isatty()
+-	def __repr__(A):return f"<ConsoleStream name={A.name!r} encoding={A.encoding!r}>"
+-class WindowsChunkedWriter:
+-	def __init__(A,wrapped):A.__wrapped=wrapped
+-	def __getattr__(A,name):return getattr(A.__wrapped,name)
+-	def write(D,text):
+-		B=len(text);A=0
+-		while A<B:C=min(B-A,MAX_BYTES_WRITTEN);D.__wrapped.write(text[A:A+C]);A+=C
+-def _get_text_stdin(buffer_stream):A=_NonClosingTextIOWrapper(io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),_C,_D,line_buffering=_B);return ConsoleStream(A,buffer_stream)
+-def _get_text_stdout(buffer_stream):A=_NonClosingTextIOWrapper(io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),_C,_D,line_buffering=_B);return ConsoleStream(A,buffer_stream)
+-def _get_text_stderr(buffer_stream):A=_NonClosingTextIOWrapper(io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),_C,_D,line_buffering=_B);return ConsoleStream(A,buffer_stream)
+-_stream_factories={0:_get_text_stdin,1:_get_text_stdout,2:_get_text_stderr}
+-def _is_console(f):
+-	if not hasattr(f,'fileno'):return _E
+-	try:A=f.fileno()
+-	except OSError:return _E
+-	B=msvcrt.get_osfhandle(A);return bool(GetConsoleMode(B,byref(DWORD())))
+-def _get_windows_console_stream(f,encoding,errors):
+-	if get_buffer is not _A and encoding in{_C,_A}and errors in{_D,_A}and _is_console(f):
+-		A=_stream_factories.get(f.fileno())
+-		if A is not _A:
+-			f=getattr(f,'buffer',_A)
+-			if f is _A:return _A
+-			return A(f)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/core.py b/dynaconf/vendor/click/core.py
+deleted file mode 100644
+index fe475eb..0000000
+--- a/dynaconf/vendor/click/core.py
++++ /dev/null
+@@ -1,620 +0,0 @@
+-_I='default'
+-_H=' / '
+-_G='...'
+-_F='nargs'
+-_E='-'
+-_D='_'
+-_C=True
+-_B=False
+-_A=None
+-import errno,inspect,os,sys
+-from contextlib import contextmanager
+-from functools import update_wrapper
+-from itertools import repeat
+-from ._unicodefun import _verify_python_env
+-from .exceptions import Abort
+-from .exceptions import BadParameter
+-from .exceptions import ClickException
+-from .exceptions import Exit
+-from .exceptions import MissingParameter
+-from .exceptions import UsageError
+-from .formatting import HelpFormatter
+-from .formatting import join_options
+-from .globals import pop_context
+-from .globals import push_context
+-from .parser import OptionParser
+-from .parser import split_opt
+-from .termui import confirm
+-from .termui import prompt
+-from .termui import style
+-from .types import BOOL
+-from .types import convert_type
+-from .types import IntRange
+-from .utils import echo
+-from .utils import make_default_short_help
+-from .utils import make_str
+-from .utils import PacifyFlushWrapper
+-_missing=object()
+-SUBCOMMAND_METAVAR='COMMAND [ARGS]...'
+-SUBCOMMANDS_METAVAR='COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...'
+-DEPRECATED_HELP_NOTICE=' (DEPRECATED)'
+-DEPRECATED_INVOKE_NOTICE='DeprecationWarning: The command {name} is deprecated.'
+-def _maybe_show_deprecated_notice(cmd):
+-	if cmd.deprecated:echo(style(DEPRECATED_INVOKE_NOTICE.format(name=cmd.name),fg='red'),err=_C)
+-def fast_exit(code):sys.stdout.flush();sys.stderr.flush();os._exit(code)
+-def _bashcomplete(cmd,prog_name,complete_var=_A):
+-	B=prog_name;A=complete_var
+-	if A is _A:A=f"_{B}_COMPLETE".replace(_E,_D).upper()
+-	C=os.environ.get(A)
+-	if not C:return
+-	from ._bashcomplete import bashcomplete as D
+-	if D(cmd,B,A,C):fast_exit(1)
+-def _check_multicommand(base_command,cmd_name,cmd,register=_B):
+-	B=cmd_name;A=base_command
+-	if not A.chain or not isinstance(cmd,MultiCommand):return
+-	if register:C='It is not possible to add multi commands as children to another multi command that is in chain mode.'
+-	else:C='Found a multi command as subcommand to a multi command that is in chain mode. This is not supported.'
+-	raise RuntimeError(f"{C}. Command {A.name!r} is set to chain and {B!r} was added as a subcommand but it in itself is a multi command. ({B!r} is a {type(cmd).__name__} within a chained {type(A).__name__} named {A.name!r}).")
+-def batch(iterable,batch_size):return list(zip(*repeat(iter(iterable),batch_size)))
+-@contextmanager
+-def augment_usage_errors(ctx,param=_A):
+-	B=param
+-	try:yield
+-	except BadParameter as A:
+-		if A.ctx is _A:A.ctx=ctx
+-		if B is not _A and A.param is _A:A.param=B
+-		raise
+-	except UsageError as A:
+-		if A.ctx is _A:A.ctx=ctx
+-		raise
+-def iter_params_for_processing(invocation_order,declaration_order):
+-	def A(item):
+-		try:A=invocation_order.index(item)
+-		except ValueError:A=float('inf')
+-		return not item.is_eager,A
+-	return sorted(declaration_order,key=A)
+-class ParameterSource:
+-	COMMANDLINE='COMMANDLINE';ENVIRONMENT='ENVIRONMENT';DEFAULT='DEFAULT';DEFAULT_MAP='DEFAULT_MAP';VALUES={COMMANDLINE,ENVIRONMENT,DEFAULT,DEFAULT_MAP}
+-	@classmethod
+-	def validate(A,value):
+-		B=value
+-		if B not in A.VALUES:raise ValueError(f"Invalid ParameterSource value: {B!r}. Valid values are: {','.join(A.VALUES)}")
+-class Context:
+-	def __init__(A,command,parent=_A,info_name=_A,obj=_A,auto_envvar_prefix=_A,default_map=_A,terminal_width=_A,max_content_width=_A,resilient_parsing=_B,allow_extra_args=_A,allow_interspersed_args=_A,ignore_unknown_options=_A,help_option_names=_A,token_normalize_func=_A,color=_A,show_default=_A):
+-		O=info_name;N=color;M=token_normalize_func;L=ignore_unknown_options;K=allow_interspersed_args;J=allow_extra_args;I=max_content_width;H=terminal_width;G=default_map;F=obj;E=help_option_names;D=command;C=auto_envvar_prefix;B=parent;A.parent=B;A.command=D;A.info_name=O;A.params={};A.args=[];A.protected_args=[]
+-		if F is _A and B is not _A:F=B.obj
+-		A.obj=F;A._meta=getattr(B,'meta',{})
+-		if G is _A and B is not _A and B.default_map is not _A:G=B.default_map.get(O)
+-		A.default_map=G;A.invoked_subcommand=_A
+-		if H is _A and B is not _A:H=B.terminal_width
+-		A.terminal_width=H
+-		if I is _A and B is not _A:I=B.max_content_width
+-		A.max_content_width=I
+-		if J is _A:J=D.allow_extra_args
+-		A.allow_extra_args=J
+-		if K is _A:K=D.allow_interspersed_args
+-		A.allow_interspersed_args=K
+-		if L is _A:L=D.ignore_unknown_options
+-		A.ignore_unknown_options=L
+-		if E is _A:
+-			if B is not _A:E=B.help_option_names
+-			else:E=['--help']
+-		A.help_option_names=E
+-		if M is _A and B is not _A:M=B.token_normalize_func
+-		A.token_normalize_func=M;A.resilient_parsing=resilient_parsing
+-		if C is _A:
+-			if B is not _A and B.auto_envvar_prefix is not _A and A.info_name is not _A:C=f"{B.auto_envvar_prefix}_{A.info_name.upper()}"
+-		else:C=C.upper()
+-		if C is not _A:C=C.replace(_E,_D)
+-		A.auto_envvar_prefix=C
+-		if N is _A and B is not _A:N=B.color
+-		A.color=N;A.show_default=show_default;A._close_callbacks=[];A._depth=0;A._source_by_paramname={}
+-	def __enter__(A):A._depth+=1;push_context(A);return A
+-	def __exit__(A,exc_type,exc_value,tb):
+-		A._depth-=1
+-		if A._depth==0:A.close()
+-		pop_context()
+-	@contextmanager
+-	def scope(self,cleanup=_C):
+-		B=cleanup;A=self
+-		if not B:A._depth+=1
+-		try:
+-			with A as C:yield C
+-		finally:
+-			if not B:A._depth-=1
+-	@property
+-	def meta(self):return self._meta
+-	def make_formatter(A):return HelpFormatter(width=A.terminal_width,max_width=A.max_content_width)
+-	def call_on_close(A,f):A._close_callbacks.append(f);return f
+-	def close(A):
+-		for B in A._close_callbacks:B()
+-		A._close_callbacks=[]
+-	@property
+-	def command_path(self):
+-		A=self;B=''
+-		if A.info_name is not _A:B=A.info_name
+-		if A.parent is not _A:B=f"{A.parent.command_path} {B}"
+-		return B.lstrip()
+-	def find_root(B):
+-		A=B
+-		while A.parent is not _A:A=A.parent
+-		return A
+-	def find_object(B,object_type):
+-		A=B
+-		while A is not _A:
+-			if isinstance(A.obj,object_type):return A.obj
+-			A=A.parent
+-	def ensure_object(B,object_type):
+-		C=object_type;A=B.find_object(C)
+-		if A is _A:B.obj=A=C()
+-		return A
+-	def lookup_default(B,name):
+-		if B.default_map is not _A:
+-			A=B.default_map.get(name)
+-			if callable(A):A=A()
+-			return A
+-	def fail(A,message):raise UsageError(message,A)
+-	def abort(A):raise Abort()
+-	def exit(A,code=0):raise Exit(code)
+-	def get_usage(A):return A.command.get_usage(A)
+-	def get_help(A):return A.command.get_help(A)
+-	def invoke(*B,**E):
+-		F,A=B[:2];G=F
+-		if isinstance(A,Command):
+-			C=A;A=C.callback;G=Context(C,info_name=C.name,parent=F)
+-			if A is _A:raise TypeError('The given command does not have a callback that can be invoked.')
+-			for D in C.params:
+-				if D.name not in E and D.expose_value:E[D.name]=D.get_default(G)
+-		B=B[2:]
+-		with augment_usage_errors(F):
+-			with G:return A(*B,**E)
+-	def forward(*E,**A):
+-		B,D=E[:2]
+-		if not isinstance(D,Command):raise TypeError('Callback is not a command.')
+-		for C in B.params:
+-			if C not in A:A[C]=B.params[C]
+-		return B.invoke(D,**A)
+-	def set_parameter_source(B,name,source):A=source;ParameterSource.validate(A);B._source_by_paramname[name]=A
+-	def get_parameter_source(A,name):return A._source_by_paramname[name]
+-class BaseCommand:
+-	allow_extra_args=_B;allow_interspersed_args=_C;ignore_unknown_options=_B
+-	def __init__(B,name,context_settings=_A):
+-		A=context_settings;B.name=name
+-		if A is _A:A={}
+-		B.context_settings=A
+-	def __repr__(A):return f"<{A.__class__.__name__} {A.name}>"
+-	def get_usage(A,ctx):raise NotImplementedError('Base commands cannot get usage')
+-	def get_help(A,ctx):raise NotImplementedError('Base commands cannot get help')
+-	def make_context(A,info_name,args,parent=_A,**B):
+-		for (D,E) in A.context_settings.items():
+-			if D not in B:B[D]=E
+-		C=Context(A,info_name=info_name,parent=parent,**B)
+-		with C.scope(cleanup=_B):A.parse_args(C,args)
+-		return C
+-	def parse_args(A,ctx,args):raise NotImplementedError('Base commands do not know how to parse arguments.')
+-	def invoke(A,ctx):raise NotImplementedError('Base commands are not invokable by default')
+-	def main(E,args=_A,prog_name=_A,complete_var=_A,standalone_mode=_C,**G):
+-		D=standalone_mode;C=prog_name;B=args;_verify_python_env()
+-		if B is _A:B=sys.argv[1:]
+-		else:B=list(B)
+-		if C is _A:C=make_str(os.path.basename(sys.argv[0]if sys.argv else __file__))
+-		_bashcomplete(E,C,complete_var)
+-		try:
+-			try:
+-				with E.make_context(C,B,**G)as F:
+-					H=E.invoke(F)
+-					if not D:return H
+-					F.exit()
+-			except (EOFError,KeyboardInterrupt):echo(file=sys.stderr);raise Abort()
+-			except ClickException as A:
+-				if not D:raise
+-				A.show();sys.exit(A.exit_code)
+-			except OSError as A:
+-				if A.errno==errno.EPIPE:sys.stdout=PacifyFlushWrapper(sys.stdout);sys.stderr=PacifyFlushWrapper(sys.stderr);sys.exit(1)
+-				else:raise
+-		except Exit as A:
+-			if D:sys.exit(A.exit_code)
+-			else:return A.exit_code
+-		except Abort:
+-			if not D:raise
+-			echo('Aborted!',file=sys.stderr);sys.exit(1)
+-	def __call__(A,*B,**C):return A.main(*B,**C)
+-class Command(BaseCommand):
+-	def __init__(A,name,context_settings=_A,callback=_A,params=_A,help=_A,epilog=_A,short_help=_A,options_metavar='[OPTIONS]',add_help_option=_C,no_args_is_help=_B,hidden=_B,deprecated=_B):
+-		B='\x0c';BaseCommand.__init__(A,name,context_settings);A.callback=callback;A.params=params or[]
+-		if help and B in help:help=help.split(B,1)[0]
+-		A.help=help;A.epilog=epilog;A.options_metavar=options_metavar;A.short_help=short_help;A.add_help_option=add_help_option;A.no_args_is_help=no_args_is_help;A.hidden=hidden;A.deprecated=deprecated
+-	def __repr__(A):return f"<{A.__class__.__name__} {A.name}>"
+-	def get_usage(B,ctx):A=ctx.make_formatter();B.format_usage(ctx,A);return A.getvalue().rstrip('\n')
+-	def get_params(B,ctx):
+-		A=B.params;C=B.get_help_option(ctx)
+-		if C is not _A:A=A+[C]
+-		return A
+-	def format_usage(A,ctx,formatter):B=A.collect_usage_pieces(ctx);formatter.write_usage(ctx.command_path,' '.join(B))
+-	def collect_usage_pieces(A,ctx):
+-		B=[A.options_metavar]
+-		for C in A.get_params(ctx):B.extend(C.get_usage_pieces(ctx))
+-		return B
+-	def get_help_option_names(C,ctx):
+-		A=set(ctx.help_option_names)
+-		for B in C.params:A.difference_update(B.opts);A.difference_update(B.secondary_opts)
+-		return A
+-	def get_help_option(A,ctx):
+-		B=A.get_help_option_names(ctx)
+-		if not B or not A.add_help_option:return
+-		def C(ctx,param,value):
+-			A=ctx
+-			if value and not A.resilient_parsing:echo(A.get_help(),color=A.color);A.exit()
+-		return Option(B,is_flag=_C,is_eager=_C,expose_value=_B,callback=C,help='Show this message and exit.')
+-	def make_parser(C,ctx):
+-		A=ctx;B=OptionParser(A)
+-		for D in C.get_params(A):D.add_to_parser(B,A)
+-		return B
+-	def get_help(B,ctx):A=ctx.make_formatter();B.format_help(ctx,A);return A.getvalue().rstrip('\n')
+-	def get_short_help_str(A,limit=45):return A.short_help or A.help and make_default_short_help(A.help,limit)or''
+-	def format_help(A,ctx,formatter):C=formatter;B=ctx;A.format_usage(B,C);A.format_help_text(B,C);A.format_options(B,C);A.format_epilog(B,C)
+-	def format_help_text(B,ctx,formatter):
+-		A=formatter
+-		if B.help:
+-			A.write_paragraph()
+-			with A.indentation():
+-				C=B.help
+-				if B.deprecated:C+=DEPRECATED_HELP_NOTICE
+-				A.write_text(C)
+-		elif B.deprecated:
+-			A.write_paragraph()
+-			with A.indentation():A.write_text(DEPRECATED_HELP_NOTICE)
+-	def format_options(D,ctx,formatter):
+-		B=formatter;A=[]
+-		for E in D.get_params(ctx):
+-			C=E.get_help_record(ctx)
+-			if C is not _A:A.append(C)
+-		if A:
+-			with B.section('Options'):B.write_dl(A)
+-	def format_epilog(B,ctx,formatter):
+-		A=formatter
+-		if B.epilog:
+-			A.write_paragraph()
+-			with A.indentation():A.write_text(B.epilog)
+-	def parse_args(C,ctx,args):
+-		B=args;A=ctx
+-		if not B and C.no_args_is_help and not A.resilient_parsing:echo(A.get_help(),color=A.color);A.exit()
+-		D=C.make_parser(A);E,B,F=D.parse_args(args=B)
+-		for G in iter_params_for_processing(F,C.get_params(A)):H,B=G.handle_parse_result(A,E,B)
+-		if B and not A.allow_extra_args and not A.resilient_parsing:A.fail(f"Got unexpected extra argument{'s'if len(B)!=1 else''} ({' '.join(map(make_str,B))})")
+-		A.args=B;return B
+-	def invoke(A,ctx):
+-		_maybe_show_deprecated_notice(A)
+-		if A.callback is not _A:return ctx.invoke(A.callback,**ctx.params)
+-class MultiCommand(Command):
+-	allow_extra_args=_C;allow_interspersed_args=_B
+-	def __init__(A,name=_A,invoke_without_command=_B,no_args_is_help=_A,subcommand_metavar=_A,chain=_B,result_callback=_A,**G):
+-		E=chain;D=invoke_without_command;C=no_args_is_help;B=subcommand_metavar;Command.__init__(A,name,**G)
+-		if C is _A:C=not D
+-		A.no_args_is_help=C;A.invoke_without_command=D
+-		if B is _A:
+-			if E:B=SUBCOMMANDS_METAVAR
+-			else:B=SUBCOMMAND_METAVAR
+-		A.subcommand_metavar=B;A.chain=E;A.result_callback=result_callback
+-		if A.chain:
+-			for F in A.params:
+-				if isinstance(F,Argument)and not F.required:raise RuntimeError('Multi commands in chain mode cannot have optional arguments.')
+-	def collect_usage_pieces(A,ctx):B=Command.collect_usage_pieces(A,ctx);B.append(A.subcommand_metavar);return B
+-	def format_options(A,ctx,formatter):B=formatter;Command.format_options(A,ctx,B);A.format_commands(ctx,B)
+-	def resultcallback(A,replace=_B):
+-		def B(f):
+-			B=A.result_callback
+-			if B is _A or replace:A.result_callback=f;return f
+-			def C(__value,*A,**C):return f(B(__value,*A,**C),*A,**C)
+-			A.result_callback=D=update_wrapper(C,f);return D
+-		return B
+-	def format_commands(F,ctx,formatter):
+-		D=formatter;B=[]
+-		for C in F.list_commands(ctx):
+-			A=F.get_command(ctx,C)
+-			if A is _A:continue
+-			if A.hidden:continue
+-			B.append((C,A))
+-		if len(B):
+-			G=D.width-6-max((len(A[0])for A in B));E=[]
+-			for (C,A) in B:help=A.get_short_help_str(G);E.append((C,help))
+-			if E:
+-				with D.section('Commands'):D.write_dl(E)
+-	def parse_args(C,ctx,args):
+-		A=ctx
+-		if not args and C.no_args_is_help and not A.resilient_parsing:echo(A.get_help(),color=A.color);A.exit()
+-		B=Command.parse_args(C,A,args)
+-		if C.chain:A.protected_args=B;A.args=[]
+-		elif B:A.protected_args,A.args=B[:1],B[1:]
+-		return A.args
+-	def invoke(B,ctx):
+-		A=ctx
+-		def F(value):
+-			C=value
+-			if B.result_callback is not _A:C=A.invoke(B.result_callback,C,**A.params)
+-			return C
+-		if not A.protected_args:
+-			if B.invoke_without_command:
+-				if not B.chain:return Command.invoke(B,A)
+-				with A:Command.invoke(B,A);return F([])
+-			A.fail('Missing command.')
+-		D=A.protected_args+A.args;A.args=[];A.protected_args=[]
+-		if not B.chain:
+-			with A:
+-				E,G,D=B.resolve_command(A,D);A.invoked_subcommand=E;Command.invoke(B,A);C=G.make_context(E,D,parent=A)
+-				with C:return F(C.command.invoke(C))
+-		with A:
+-			A.invoked_subcommand='*'if D else _A;Command.invoke(B,A);H=[]
+-			while D:E,G,D=B.resolve_command(A,D);C=G.make_context(E,D,parent=A,allow_extra_args=_C,allow_interspersed_args=_B);H.append(C);D,C.args=C.args,[]
+-			I=[]
+-			for C in H:
+-				with C:I.append(C.command.invoke(C))
+-			return F(I)
+-	def resolve_command(D,ctx,args):
+-		A=ctx;B=make_str(args[0]);E=B;C=D.get_command(A,B)
+-		if C is _A and A.token_normalize_func is not _A:B=A.token_normalize_func(B);C=D.get_command(A,B)
+-		if C is _A and not A.resilient_parsing:
+-			if split_opt(B)[0]:D.parse_args(A,A.args)
+-			A.fail(f"No such command '{E}'.")
+-		return B,C,args[1:]
+-	def get_command(A,ctx,cmd_name):raise NotImplementedError()
+-	def list_commands(A,ctx):return[]
+-class Group(MultiCommand):
+-	def __init__(A,name=_A,commands=_A,**B):MultiCommand.__init__(A,name,**B);A.commands=commands or{}
+-	def add_command(C,cmd,name=_A):
+-		B=cmd;A=name;A=A or B.name
+-		if A is _A:raise TypeError('Command has no name.')
+-		_check_multicommand(C,A,B,register=_C);C.commands[A]=B
+-	def command(B,*C,**D):
+-		from .decorators import command as E
+-		def A(f):A=E(*C,**D)(f);B.add_command(A);return A
+-		return A
+-	def group(B,*C,**D):
+-		from .decorators import group
+-		def A(f):A=group(*C,**D)(f);B.add_command(A);return A
+-		return A
+-	def get_command(A,ctx,cmd_name):return A.commands.get(cmd_name)
+-	def list_commands(A,ctx):return sorted(A.commands)
+-class CommandCollection(MultiCommand):
+-	def __init__(A,name=_A,sources=_A,**B):MultiCommand.__init__(A,name,**B);A.sources=sources or[]
+-	def add_source(A,multi_cmd):A.sources.append(multi_cmd)
+-	def get_command(A,ctx,cmd_name):
+-		C=cmd_name
+-		for D in A.sources:
+-			B=D.get_command(ctx,C)
+-			if B is not _A:
+-				if A.chain:_check_multicommand(A,C,B)
+-				return B
+-	def list_commands(B,ctx):
+-		A=set()
+-		for C in B.sources:A.update(C.list_commands(ctx))
+-		return sorted(A)
+-class Parameter:
+-	param_type_name='parameter'
+-	def __init__(A,param_decls=_A,type=_A,required=_B,default=_A,callback=_A,nargs=_A,metavar=_A,expose_value=_C,is_eager=_B,envvar=_A,autocompletion=_A):
+-		D=expose_value;C=default;B=nargs;A.name,A.opts,A.secondary_opts=A._parse_decls(param_decls or(),D);A.type=convert_type(type,C)
+-		if B is _A:
+-			if A.type.is_composite:B=A.type.arity
+-			else:B=1
+-		A.required=required;A.callback=callback;A.nargs=B;A.multiple=_B;A.expose_value=D;A.default=C;A.is_eager=is_eager;A.metavar=metavar;A.envvar=envvar;A.autocompletion=autocompletion
+-	def __repr__(A):return f"<{A.__class__.__name__} {A.name}>"
+-	@property
+-	def human_readable_name(self):return self.name
+-	def make_metavar(A):
+-		if A.metavar is not _A:return A.metavar
+-		B=A.type.get_metavar(A)
+-		if B is _A:B=A.type.name.upper()
+-		if A.nargs!=1:B+=_G
+-		return B
+-	def get_default(A,ctx):
+-		if callable(A.default):B=A.default()
+-		else:B=A.default
+-		return A.type_cast_value(ctx,B)
+-	def add_to_parser(A,parser,ctx):0
+-	def consume_value(B,ctx,opts):
+-		C=ctx;A=opts.get(B.name);D=ParameterSource.COMMANDLINE
+-		if A is _A:A=B.value_from_envvar(C);D=ParameterSource.ENVIRONMENT
+-		if A is _A:A=C.lookup_default(B.name);D=ParameterSource.DEFAULT_MAP
+-		if A is not _A:C.set_parameter_source(B.name,D)
+-		return A
+-	def type_cast_value(A,ctx,value):
+-		C=value;B=ctx
+-		if A.type.is_composite:
+-			if A.nargs<=1:raise TypeError(f"Attempted to invoke composite type but nargs has been set to {A.nargs}. This is not supported; nargs needs to be set to a fixed value > 1.")
+-			if A.multiple:return tuple((A.type(D or(),A,B)for D in C or()))
+-			return A.type(C or(),A,B)
+-		def D(value,level):
+-			E=level;C=value
+-			if E==0:return A.type(C,A,B)
+-			return tuple((D(A,E-1)for A in C or()))
+-		return D(C,(A.nargs!=1)+bool(A.multiple))
+-	def process_value(B,ctx,value):
+-		A=value
+-		if A is not _A:return B.type_cast_value(ctx,A)
+-	def value_is_missing(A,value):
+-		B=value
+-		if B is _A:return _C
+-		if(A.nargs!=1 or A.multiple)and B==():return _C
+-		return _B
+-	def full_process_value(B,ctx,value):
+-		C=ctx;A=value;A=B.process_value(C,A)
+-		if A is _A and not C.resilient_parsing:
+-			A=B.get_default(C)
+-			if A is not _A:C.set_parameter_source(B.name,ParameterSource.DEFAULT)
+-		if B.required and B.value_is_missing(A):raise MissingParameter(ctx=C,param=B)
+-		return A
+-	def resolve_envvar_value(B,ctx):
+-		if B.envvar is _A:return
+-		if isinstance(B.envvar,(tuple,list)):
+-			for C in B.envvar:
+-				A=os.environ.get(C)
+-				if A is not _A:return A
+-		else:
+-			A=os.environ.get(B.envvar)
+-			if A!='':return A
+-	def value_from_envvar(B,ctx):
+-		A=B.resolve_envvar_value(ctx)
+-		if A is not _A and B.nargs!=1:A=B.type.split_envvar_value(A)
+-		return A
+-	def handle_parse_result(A,ctx,opts,args):
+-		B=ctx
+-		with augment_usage_errors(B,param=A):
+-			C=A.consume_value(B,opts)
+-			try:C=A.full_process_value(B,C)
+-			except Exception:
+-				if not B.resilient_parsing:raise
+-				C=_A
+-			if A.callback is not _A:
+-				try:C=A.callback(B,A,C)
+-				except Exception:
+-					if not B.resilient_parsing:raise
+-		if A.expose_value:B.params[A.name]=C
+-		return C,args
+-	def get_help_record(A,ctx):0
+-	def get_usage_pieces(A,ctx):return[]
+-	def get_error_hint(A,ctx):B=A.opts or[A.human_readable_name];return _H.join((repr(A)for A in B))
+-class Option(Parameter):
+-	param_type_name='option'
+-	def __init__(A,param_decls=_A,show_default=_B,prompt=_B,confirmation_prompt=_B,hide_input=_B,is_flag=_A,flag_value=_A,multiple=_B,count=_B,allow_from_autoenv=_C,type=_A,help=_A,hidden=_B,show_choices=_C,show_envvar=_B,**G):
+-		F=count;D=prompt;C=flag_value;B=is_flag;H=G.get(_I,_missing)is _missing;Parameter.__init__(A,param_decls,type=type,**G)
+-		if D is _C:E=A.name.replace(_D,' ').capitalize()
+-		elif D is _B:E=_A
+-		else:E=D
+-		A.prompt=E;A.confirmation_prompt=confirmation_prompt;A.hide_input=hide_input;A.hidden=hidden
+-		if B is _A:
+-			if C is not _A:B=_C
+-			else:B=bool(A.secondary_opts)
+-		if B and H:A.default=_B
+-		if C is _A:C=not A.default
+-		A.is_flag=B;A.flag_value=C
+-		if A.is_flag and isinstance(A.flag_value,bool)and type in[_A,bool]:A.type=BOOL;A.is_bool_flag=_C
+-		else:A.is_bool_flag=_B
+-		A.count=F
+-		if F:
+-			if type is _A:A.type=IntRange(min=0)
+-			if H:A.default=0
+-		A.multiple=multiple;A.allow_from_autoenv=allow_from_autoenv;A.help=help;A.show_default=show_default;A.show_choices=show_choices;A.show_envvar=show_envvar
+-		if __debug__:
+-			if A.nargs<0:raise TypeError('Options cannot have nargs < 0')
+-			if A.prompt and A.is_flag and not A.is_bool_flag:raise TypeError('Cannot prompt for flags that are not bools.')
+-			if not A.is_bool_flag and A.secondary_opts:raise TypeError('Got secondary option for non boolean flag.')
+-			if A.is_bool_flag and A.hide_input and A.prompt is not _A:raise TypeError('Hidden input does not work with boolean flag prompts.')
+-			if A.count:
+-				if A.multiple:raise TypeError('Options cannot be multiple and count at the same time.')
+-				elif A.is_flag:raise TypeError('Options cannot be count and flags at the same time.')
+-	def _parse_decls(J,decls,expose_value):
+-		I='/';C=[];F=[];A=_A;D=[]
+-		for B in decls:
+-			if B.isidentifier():
+-				if A is not _A:raise TypeError('Name defined twice')
+-				A=B
+-			else:
+-				H=';'if B[:1]==I else I
+-				if H in B:
+-					E,G=B.split(H,1);E=E.rstrip()
+-					if E:D.append(split_opt(E));C.append(E)
+-					G=G.lstrip()
+-					if G:F.append(G.lstrip())
+-				else:D.append(split_opt(B));C.append(B)
+-		if A is _A and D:
+-			D.sort(key=lambda x:-len(x[0]));A=D[0][1].replace(_E,_D).lower()
+-			if not A.isidentifier():A=_A
+-		if A is _A:
+-			if not expose_value:return _A,C,F
+-			raise TypeError('Could not determine name for option')
+-		if not C and not F:raise TypeError(f"No options defined but a name was passed ({A}). Did you mean to declare an argument instead of an option?")
+-		return A,C,F
+-	def add_to_parser(A,parser,ctx):
+-		C=parser;B={'dest':A.name,_F:A.nargs,'obj':A}
+-		if A.multiple:D='append'
+-		elif A.count:D='count'
+-		else:D='store'
+-		if A.is_flag:
+-			B.pop(_F,_A);E=f"{D}_const"
+-			if A.is_bool_flag and A.secondary_opts:C.add_option(A.opts,action=E,const=_C,**B);C.add_option(A.secondary_opts,action=E,const=_B,**B)
+-			else:C.add_option(A.opts,action=E,const=A.flag_value,**B)
+-		else:B['action']=D;C.add_option(A.opts,**B)
+-	def get_help_record(A,ctx):
+-		K=', ';E=ctx
+-		if A.hidden:return
+-		F=[]
+-		def G(opts):
+-			B,C=join_options(opts)
+-			if C:F[:]=[_C]
+-			if not A.is_flag and not A.count:B+=f" {A.make_metavar()}"
+-			return B
+-		H=[G(A.opts)]
+-		if A.secondary_opts:H.append(G(A.secondary_opts))
+-		help=A.help or'';C=[]
+-		if A.show_envvar:
+-			B=A.envvar
+-			if B is _A:
+-				if A.allow_from_autoenv and E.auto_envvar_prefix is not _A:B=f"{E.auto_envvar_prefix}_{A.name.upper()}"
+-			if B is not _A:J=K.join((str(A)for A in B))if isinstance(B,(list,tuple))else B;C.append(f"env var: {J}")
+-		if A.default is not _A and(A.show_default or E.show_default):
+-			if isinstance(A.show_default,str):D=f"({A.show_default})"
+-			elif isinstance(A.default,(list,tuple)):D=K.join((str(B)for B in A.default))
+-			elif inspect.isfunction(A.default):D='(dynamic)'
+-			else:D=A.default
+-			C.append(f"default: {D}")
+-		if A.required:C.append('required')
+-		if C:I=';'.join(C);help=f"{help}  [{I}]"if help else f"[{I}]"
+-		return ('; 'if F else _H).join(H),help
+-	def get_default(A,ctx):
+-		if A.is_flag and not A.is_bool_flag:
+-			for B in ctx.command.params:
+-				if B.name==A.name and B.default:return B.flag_value
+-			return _A
+-		return Parameter.get_default(A,ctx)
+-	def prompt_for_value(A,ctx):
+-		B=A.get_default(ctx)
+-		if A.is_bool_flag:return confirm(A.prompt,B)
+-		return prompt(A.prompt,default=B,type=A.type,hide_input=A.hide_input,show_choices=A.show_choices,confirmation_prompt=A.confirmation_prompt,value_proc=lambda x:A.process_value(ctx,x))
+-	def resolve_envvar_value(A,ctx):
+-		B=ctx;C=Parameter.resolve_envvar_value(A,B)
+-		if C is not _A:return C
+-		if A.allow_from_autoenv and B.auto_envvar_prefix is not _A:D=f"{B.auto_envvar_prefix}_{A.name.upper()}";return os.environ.get(D)
+-	def value_from_envvar(A,ctx):
+-		B=A.resolve_envvar_value(ctx)
+-		if B is _A:return _A
+-		C=(A.nargs!=1)+bool(A.multiple)
+-		if C>0 and B is not _A:
+-			B=A.type.split_envvar_value(B)
+-			if A.multiple and A.nargs!=1:B=batch(B,A.nargs)
+-		return B
+-	def full_process_value(A,ctx,value):
+-		C=value;B=ctx
+-		if C is _A and A.prompt is not _A and not B.resilient_parsing:return A.prompt_for_value(B)
+-		return Parameter.full_process_value(A,B,C)
+-class Argument(Parameter):
+-	param_type_name='argument'
+-	def __init__(B,param_decls,required=_A,**C):
+-		A=required
+-		if A is _A:
+-			if C.get(_I)is not _A:A=_B
+-			else:A=C.get(_F,1)>0
+-		Parameter.__init__(B,param_decls,required=A,**C)
+-		if B.default is not _A and B.nargs<0:raise TypeError('nargs=-1 in combination with a default value is not supported.')
+-	@property
+-	def human_readable_name(self):
+-		A=self
+-		if A.metavar is not _A:return A.metavar
+-		return A.name.upper()
+-	def make_metavar(A):
+-		if A.metavar is not _A:return A.metavar
+-		B=A.type.get_metavar(A)
+-		if not B:B=A.name.upper()
+-		if not A.required:B=f"[{B}]"
+-		if A.nargs!=1:B+=_G
+-		return B
+-	def _parse_decls(D,decls,expose_value):
+-		A=decls
+-		if not A:
+-			if not expose_value:return _A,[],[]
+-			raise TypeError('Could not determine name for argument')
+-		if len(A)==1:B=C=A[0];B=B.replace(_E,_D).lower()
+-		else:raise TypeError(f"Arguments take exactly one parameter declaration, got {len(A)}.")
+-		return B,[C],[]
+-	def get_usage_pieces(A,ctx):return[A.make_metavar()]
+-	def get_error_hint(A,ctx):return repr(A.make_metavar())
+-	def add_to_parser(A,parser,ctx):parser.add_argument(dest=A.name,nargs=A.nargs,obj=A)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/decorators.py b/dynaconf/vendor/click/decorators.py
+deleted file mode 100644
+index 888b3e0..0000000
+--- a/dynaconf/vendor/click/decorators.py
++++ /dev/null
+@@ -1,115 +0,0 @@
+-_J='is_eager'
+-_I='prompt'
+-_H='expose_value'
+-_G='callback'
+-_F='is_flag'
+-_E='cls'
+-_D=False
+-_C=True
+-_B='help'
+-_A=None
+-import inspect,sys
+-from functools import update_wrapper
+-from .core import Argument
+-from .core import Command
+-from .core import Group
+-from .core import Option
+-from .globals import get_current_context
+-from .utils import echo
+-def pass_context(f):
+-	'Marks a callback as wanting to receive the current context\n    object as first argument.\n    '
+-	def A(*A,**B):return f(get_current_context(),*A,**B)
+-	return update_wrapper(A,f)
+-def pass_obj(f):
+-	'Similar to :func:`pass_context`, but only pass the object on the\n    context onwards (:attr:`Context.obj`).  This is useful if that object\n    represents the state of a nested system.\n    '
+-	def A(*A,**B):return f(get_current_context().obj,*A,**B)
+-	return update_wrapper(A,f)
+-def make_pass_decorator(object_type,ensure=_D):
+-	"Given an object type this creates a decorator that will work\n    similar to :func:`pass_obj` but instead of passing the object of the\n    current context, it will find the innermost context of type\n    :func:`object_type`.\n\n    This generates a decorator that works roughly like this::\n\n        from functools import update_wrapper\n\n        def decorator(f):\n            @pass_context\n            def new_func(ctx, *args, **kwargs):\n                obj = ctx.find_object(object_type)\n                return ctx.invoke(f, obj, *args, **kwargs)\n            return update_wrapper(new_func, f)\n        return decorator\n\n    :param object_type: the type of the object to pass.\n    :param ensure: if set to `True`, a new object will be created and\n                   remembered on the context if it's not there yet.\n    ";A=object_type
+-	def B(f):
+-		def B(*D,**E):
+-			B=get_current_context()
+-			if ensure:C=B.ensure_object(A)
+-			else:C=B.find_object(A)
+-			if C is _A:raise RuntimeError(f"Managed to invoke callback without a context object of type {A.__name__!r} existing.")
+-			return B.invoke(f,C,*D,**E)
+-		return update_wrapper(B,f)
+-	return B
+-def _make_command(f,name,attrs,cls):
+-	A=attrs
+-	if isinstance(f,Command):raise TypeError('Attempted to convert a callback into a command twice.')
+-	try:B=f.__click_params__;B.reverse();del f.__click_params__
+-	except AttributeError:B=[]
+-	help=A.get(_B)
+-	if help is _A:
+-		help=inspect.getdoc(f)
+-		if isinstance(help,bytes):help=help.decode('utf-8')
+-	else:help=inspect.cleandoc(help)
+-	A[_B]=help;return cls(name=name or f.__name__.lower().replace('_','-'),callback=f,params=B,**A)
+-def command(name=_A,cls=_A,**C):
+-	'Creates a new :class:`Command` and uses the decorated function as\n    callback.  This will also automatically attach all decorated\n    :func:`option`\\s and :func:`argument`\\s as parameters to the command.\n\n    The name of the command defaults to the name of the function with\n    underscores replaced by dashes.  If you want to change that, you can\n    pass the intended name as the first argument.\n\n    All keyword arguments are forwarded to the underlying command class.\n\n    Once decorated the function turns into a :class:`Command` instance\n    that can be invoked as a command line utility or be attached to a\n    command :class:`Group`.\n\n    :param name: the name of the command.  This defaults to the function\n                 name with underscores replaced by dashes.\n    :param cls: the command class to instantiate.  This defaults to\n                :class:`Command`.\n    ';A=cls
+-	if A is _A:A=Command
+-	def B(f):B=_make_command(f,name,C,A);B.__doc__=f.__doc__;return B
+-	return B
+-def group(name=_A,**A):'Creates a new :class:`Group` with a function as callback.  This\n    works otherwise the same as :func:`command` just that the `cls`\n    parameter is set to :class:`Group`.\n    ';A.setdefault(_E,Group);return command(name,**A)
+-def _param_memo(f,param):
+-	A=param
+-	if isinstance(f,Command):f.params.append(A)
+-	else:
+-		if not hasattr(f,'__click_params__'):f.__click_params__=[]
+-		f.__click_params__.append(A)
+-def argument(*B,**A):
+-	'Attaches an argument to the command.  All positional arguments are\n    passed as parameter declarations to :class:`Argument`; all keyword\n    arguments are forwarded unchanged (except ``cls``).\n    This is equivalent to creating an :class:`Argument` instance manually\n    and attaching it to the :attr:`Command.params` list.\n\n    :param cls: the argument class to instantiate.  This defaults to\n                :class:`Argument`.\n    '
+-	def C(f):C=A.pop(_E,Argument);_param_memo(f,C(B,**A));return f
+-	return C
+-def option(*B,**C):
+-	'Attaches an option to the command.  All positional arguments are\n    passed as parameter declarations to :class:`Option`; all keyword\n    arguments are forwarded unchanged (except ``cls``).\n    This is equivalent to creating an :class:`Option` instance manually\n    and attaching it to the :attr:`Command.params` list.\n\n    :param cls: the option class to instantiate.  This defaults to\n                :class:`Option`.\n    '
+-	def A(f):
+-		A=C.copy()
+-		if _B in A:A[_B]=inspect.cleandoc(A[_B])
+-		D=A.pop(_E,Option);_param_memo(f,D(B,**A));return f
+-	return A
+-def confirmation_option(*B,**A):
+-	"Shortcut for confirmation prompts that can be ignored by passing\n    ``--yes`` as parameter.\n\n    This is equivalent to decorating a function with :func:`option` with\n    the following parameters::\n\n        def callback(ctx, param, value):\n            if not value:\n                ctx.abort()\n\n        @click.command()\n        @click.option('--yes', is_flag=True, callback=callback,\n                      expose_value=False, prompt='Do you want to continue?')\n        def dropdb():\n            pass\n    "
+-	def C(f):
+-		def C(ctx,param,value):
+-			if not value:ctx.abort()
+-		A.setdefault(_F,_C);A.setdefault(_G,C);A.setdefault(_H,_D);A.setdefault(_I,'Do you want to continue?');A.setdefault(_B,'Confirm the action without prompting.');return option(*B or('--yes',),**A)(f)
+-	return C
+-def password_option(*B,**A):
+-	"Shortcut for password prompts.\n\n    This is equivalent to decorating a function with :func:`option` with\n    the following parameters::\n\n        @click.command()\n        @click.option('--password', prompt=True, confirmation_prompt=True,\n                      hide_input=True)\n        def changeadmin(password):\n            pass\n    "
+-	def C(f):A.setdefault(_I,_C);A.setdefault('confirmation_prompt',_C);A.setdefault('hide_input',_C);return option(*B or('--password',),**A)(f)
+-	return C
+-def version_option(version=_A,*B,**A):
+-	"Adds a ``--version`` option which immediately ends the program\n    printing out the version number.  This is implemented as an eager\n    option that prints the version and exits the program in the callback.\n\n    :param version: the version number to show.  If not provided Click\n                    attempts an auto discovery via setuptools.\n    :param prog_name: the name of the program (defaults to autodetection)\n    :param message: custom message to show instead of the default\n                    (``'%(prog)s, version %(version)s'``)\n    :param others: everything else is forwarded to :func:`option`.\n    ";D=version
+-	if D is _A:
+-		if hasattr(sys,'_getframe'):E=sys._getframe(1).f_globals.get('__name__')
+-		else:E=''
+-	def C(f):
+-		G=A.pop('prog_name',_A);H=A.pop('message','%(prog)s, version %(version)s')
+-		def C(ctx,param,value):
+-			A=ctx
+-			if not value or A.resilient_parsing:return
+-			C=G
+-			if C is _A:C=A.find_root().info_name
+-			B=D
+-			if B is _A:
+-				try:import pkg_resources as I
+-				except ImportError:pass
+-				else:
+-					for F in I.working_set:
+-						J=F.get_entry_map().get('console_scripts')or{}
+-						for K in J.values():
+-							if K.module_name==E:B=F.version;break
+-				if B is _A:raise RuntimeError('Could not determine version')
+-			echo(H%{'prog':C,'version':B},color=A.color);A.exit()
+-		A.setdefault(_F,_C);A.setdefault(_H,_D);A.setdefault(_J,_C);A.setdefault(_B,'Show the version and exit.');A[_G]=C;return option(*B or('--version',),**A)(f)
+-	return C
+-def help_option(*B,**A):
+-	'Adds a ``--help`` option which immediately ends the program\n    printing out the help page.  This is usually unnecessary to add as\n    this is added by default to all commands unless suppressed.\n\n    Like :func:`version_option`, this is implemented as eager option that\n    prints in the callback and exits.\n\n    All arguments are forwarded to :func:`option`.\n    '
+-	def C(f):
+-		def C(ctx,param,value):
+-			A=ctx
+-			if value and not A.resilient_parsing:echo(A.get_help(),color=A.color);A.exit()
+-		A.setdefault(_F,_C);A.setdefault(_H,_D);A.setdefault(_B,'Show this message and exit.');A.setdefault(_J,_C);A[_G]=C;return option(*B or('--help',),**A)(f)
+-	return C
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/exceptions.py b/dynaconf/vendor/click/exceptions.py
+deleted file mode 100644
+index 6cc0189..0000000
+--- a/dynaconf/vendor/click/exceptions.py
++++ /dev/null
+@@ -1,76 +0,0 @@
+-_A=None
+-from ._compat import filename_to_ui,get_text_stderr
+-from .utils import echo
+-def _join_param_hints(param_hint):
+-	A=param_hint
+-	if isinstance(A,(tuple,list)):return ' / '.join((repr(B)for B in A))
+-	return A
+-class ClickException(Exception):
+-	exit_code=1
+-	def __init__(B,message):A=message;super().__init__(A);B.message=A
+-	def format_message(A):return A.message
+-	def __str__(A):return A.message
+-	def show(B,file=_A):
+-		A=file
+-		if A is _A:A=get_text_stderr()
+-		echo(f"Error: {B.format_message()}",file=A)
+-class UsageError(ClickException):
+-	exit_code=2
+-	def __init__(A,message,ctx=_A):ClickException.__init__(A,message);A.ctx=ctx;A.cmd=A.ctx.command if A.ctx else _A
+-	def show(A,file=_A):
+-		B=file
+-		if B is _A:B=get_text_stderr()
+-		C=_A;D=''
+-		if A.cmd is not _A and A.cmd.get_help_option(A.ctx)is not _A:D=f"Try '{A.ctx.command_path} {A.ctx.help_option_names[0]}' for help.\n"
+-		if A.ctx is not _A:C=A.ctx.color;echo(f"{A.ctx.get_usage()}\n{D}",file=B,color=C)
+-		echo(f"Error: {A.format_message()}",file=B,color=C)
+-class BadParameter(UsageError):
+-	def __init__(A,message,ctx=_A,param=_A,param_hint=_A):UsageError.__init__(A,message,ctx);A.param=param;A.param_hint=param_hint
+-	def format_message(A):
+-		if A.param_hint is not _A:B=A.param_hint
+-		elif A.param is not _A:B=A.param.get_error_hint(A.ctx)
+-		else:return f"Invalid value: {A.message}"
+-		B=_join_param_hints(B);return f"Invalid value for {B}: {A.message}"
+-class MissingParameter(BadParameter):
+-	def __init__(A,message=_A,ctx=_A,param=_A,param_hint=_A,param_type=_A):BadParameter.__init__(A,message,ctx,param,param_hint);A.param_type=param_type
+-	def format_message(A):
+-		if A.param_hint is not _A:B=A.param_hint
+-		elif A.param is not _A:B=A.param.get_error_hint(A.ctx)
+-		else:B=_A
+-		B=_join_param_hints(B);D=A.param_type
+-		if D is _A and A.param is not _A:D=A.param.param_type_name
+-		C=A.message
+-		if A.param is not _A:
+-			E=A.param.type.get_missing_message(A.param)
+-			if E:
+-				if C:C+=f".  {E}"
+-				else:C=E
+-		F=f" {B}"if B else'';return f"Missing {D}{F}.{' 'if C else''}{C or''}"
+-	def __str__(A):
+-		if A.message is _A:B=A.param.name if A.param else _A;return f"missing parameter: {B}"
+-		else:return A.message
+-class NoSuchOption(UsageError):
+-	def __init__(A,option_name,message=_A,possibilities=_A,ctx=_A):
+-		C=option_name;B=message
+-		if B is _A:B=f"no such option: {C}"
+-		UsageError.__init__(A,B,ctx);A.option_name=C;A.possibilities=possibilities
+-	def format_message(A):
+-		B=[A.message]
+-		if A.possibilities:
+-			if len(A.possibilities)==1:B.append(f"Did you mean {A.possibilities[0]}?")
+-			else:C=sorted(A.possibilities);B.append(f"(Possible options: {', '.join(C)})")
+-		return '  '.join(B)
+-class BadOptionUsage(UsageError):
+-	def __init__(A,option_name,message,ctx=_A):UsageError.__init__(A,message,ctx);A.option_name=option_name
+-class BadArgumentUsage(UsageError):
+-	def __init__(A,message,ctx=_A):UsageError.__init__(A,message,ctx)
+-class FileError(ClickException):
+-	def __init__(A,filename,hint=_A):
+-		C=filename;B=hint;D=filename_to_ui(C)
+-		if B is _A:B='unknown error'
+-		ClickException.__init__(A,B);A.ui_filename=D;A.filename=C
+-	def format_message(A):return f"Could not open file {A.ui_filename}: {A.message}"
+-class Abort(RuntimeError):0
+-class Exit(RuntimeError):
+-	__slots__='exit_code',
+-	def __init__(A,code=0):A.exit_code=code
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/formatting.py b/dynaconf/vendor/click/formatting.py
+deleted file mode 100644
+index df18661..0000000
+--- a/dynaconf/vendor/click/formatting.py
++++ /dev/null
+@@ -1,90 +0,0 @@
+-_E=True
+-_D=False
+-_C=' '
+-_B='\n'
+-_A=None
+-from contextlib import contextmanager
+-from ._compat import term_len
+-from .parser import split_opt
+-from .termui import get_terminal_size
+-FORCED_WIDTH=_A
+-def measure_table(rows):
+-	A={}
+-	for C in rows:
+-		for (B,D) in enumerate(C):A[B]=max(A.get(B,0),term_len(D))
+-	return tuple((B for(C,B)in sorted(A.items())))
+-def iter_rows(rows,col_count):
+-	for A in rows:A=tuple(A);yield A+('',)*(col_count-len(A))
+-def wrap_text(text,width=78,initial_indent='',subsequent_indent='',preserve_paragraphs=_D):
+-	A=text;from ._textwrap import TextWrapper as I;A=A.expandtabs();E=I(width,initial_indent=initial_indent,subsequent_indent=subsequent_indent,replace_whitespace=_D)
+-	if not preserve_paragraphs:return E.fill(A)
+-	F=[];C=[];B=_A
+-	def H():
+-		if not C:return
+-		if C[0].strip()=='\x08':F.append((B or 0,_E,_B.join(C[1:])))
+-		else:F.append((B or 0,_D,_C.join(C)))
+-		del C[:]
+-	for D in A.splitlines():
+-		if not D:H();B=_A
+-		else:
+-			if B is _A:J=term_len(D);D=D.lstrip();B=J-term_len(D)
+-			C.append(D)
+-	H();G=[]
+-	for (B,K,A) in F:
+-		with E.extra_indent(_C*B):
+-			if K:G.append(E.indent_only(A))
+-			else:G.append(E.fill(A))
+-	return '\n\n'.join(G)
+-class HelpFormatter:
+-	def __init__(B,indent_increment=2,width=_A,max_width=_A):
+-		C=max_width;A=width;B.indent_increment=indent_increment
+-		if C is _A:C=80
+-		if A is _A:
+-			A=FORCED_WIDTH
+-			if A is _A:A=max(min(get_terminal_size()[0],C)-2,50)
+-		B.width=A;B.current_indent=0;B.buffer=[]
+-	def write(A,string):A.buffer.append(string)
+-	def indent(A):A.current_indent+=A.indent_increment
+-	def dedent(A):A.current_indent-=A.indent_increment
+-	def write_usage(A,prog,args='',prefix='Usage: '):
+-		E=prefix;B=f"{E:>{A.current_indent}}{prog} ";D=A.width-A.current_indent
+-		if D>=term_len(B)+20:C=_C*term_len(B);A.write(wrap_text(args,D,initial_indent=B,subsequent_indent=C))
+-		else:A.write(B);A.write(_B);C=_C*(max(A.current_indent,term_len(E))+4);A.write(wrap_text(args,D,initial_indent=C,subsequent_indent=C))
+-		A.write(_B)
+-	def write_heading(A,heading):A.write(f"{'':>{A.current_indent}}{heading}:\n")
+-	def write_paragraph(A):
+-		if A.buffer:A.write(_B)
+-	def write_text(A,text):C=max(A.width-A.current_indent,11);B=_C*A.current_indent;A.write(wrap_text(text,C,initial_indent=B,subsequent_indent=B,preserve_paragraphs=_E));A.write(_B)
+-	def write_dl(A,rows,col_max=30,col_spacing=2):
+-		G=col_spacing;C=rows;C=list(C);E=measure_table(C)
+-		if len(E)!=2:raise TypeError('Expected two columns for definition list')
+-		B=min(E[0],col_max)+G
+-		for (F,H) in iter_rows(C,len(E)):
+-			A.write(f"{'':>{A.current_indent}}{F}")
+-			if not H:A.write(_B);continue
+-			if term_len(F)<=B-G:A.write(_C*(B-term_len(F)))
+-			else:A.write(_B);A.write(_C*(B+A.current_indent))
+-			I=max(A.width-B-2,10);J=wrap_text(H,I,preserve_paragraphs=_E);D=J.splitlines()
+-			if D:
+-				A.write(f"{D[0]}\n")
+-				for K in D[1:]:A.write(f"{'':>{B+A.current_indent}}{K}\n")
+-				if len(D)>1:A.write(_B)
+-			else:A.write(_B)
+-	@contextmanager
+-	def section(self,name):
+-		A=self;A.write_paragraph();A.write_heading(name);A.indent()
+-		try:yield
+-		finally:A.dedent()
+-	@contextmanager
+-	def indentation(self):
+-		self.indent()
+-		try:yield
+-		finally:self.dedent()
+-	def getvalue(A):return ''.join(A.buffer)
+-def join_options(options):
+-	A=[];B=_D
+-	for C in options:
+-		D=split_opt(C)[0]
+-		if D=='/':B=_E
+-		A.append((len(D),C))
+-	A.sort(key=lambda x:x[0]);A=', '.join((B[1]for B in A));return A,B
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/globals.py b/dynaconf/vendor/click/globals.py
+deleted file mode 100644
+index e0b71c5..0000000
+--- a/dynaconf/vendor/click/globals.py
++++ /dev/null
+@@ -1,14 +0,0 @@
+-_A=None
+-from threading import local
+-_local=local()
+-def get_current_context(silent=False):
+-	try:return _local.stack[-1]
+-	except (AttributeError,IndexError):
+-		if not silent:raise RuntimeError('There is no active click context.')
+-def push_context(ctx):_local.__dict__.setdefault('stack',[]).append(ctx)
+-def pop_context():_local.stack.pop()
+-def resolve_color_default(color=_A):
+-	A=color
+-	if A is not _A:return A
+-	B=get_current_context(silent=True)
+-	if B is not _A:return B.color
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/parser.py b/dynaconf/vendor/click/parser.py
+deleted file mode 100644
+index 769c403..0000000
+--- a/dynaconf/vendor/click/parser.py
++++ /dev/null
+@@ -1,157 +0,0 @@
+-_D=False
+-_C='append'
+-_B='store'
+-_A=None
+-import re
+-from collections import deque
+-from .exceptions import BadArgumentUsage
+-from .exceptions import BadOptionUsage
+-from .exceptions import NoSuchOption
+-from .exceptions import UsageError
+-def _unpack_args(args,nargs_spec):
+-	D=nargs_spec;C=args;C=deque(C);D=deque(D);A=[];B=_A
+-	def F(c):
+-		try:
+-			if B is _A:return c.popleft()
+-			else:return c.pop()
+-		except IndexError:return _A
+-	while D:
+-		E=F(D)
+-		if E==1:A.append(F(C))
+-		elif E>1:
+-			G=[F(C)for A in range(E)]
+-			if B is not _A:G.reverse()
+-			A.append(tuple(G))
+-		elif E<0:
+-			if B is not _A:raise TypeError('Cannot have two nargs < 0')
+-			B=len(A);A.append(_A)
+-	if B is not _A:A[B]=tuple(C);C=[];A[B+1:]=reversed(A[B+1:])
+-	return tuple(A),list(C)
+-def _error_opt_args(nargs,opt):
+-	B=nargs;A=opt
+-	if B==1:raise BadOptionUsage(A,f"{A} option requires an argument")
+-	raise BadOptionUsage(A,f"{A} option requires {B} arguments")
+-def split_opt(opt):
+-	A=opt;B=A[:1]
+-	if B.isalnum():return'',A
+-	if A[1:2]==B:return A[:2],A[2:]
+-	return B,A[1:]
+-def normalize_opt(opt,ctx):
+-	B=ctx;A=opt
+-	if B is _A or B.token_normalize_func is _A:return A
+-	C,A=split_opt(A);return f"{C}{B.token_normalize_func(A)}"
+-def split_arg_string(string):
+-	B=string;C=[]
+-	for D in re.finditer('(\'([^\'\\\\]*(?:\\\\.[^\'\\\\]*)*)\'|\\"([^\\"\\\\]*(?:\\\\.[^\\"\\\\]*)*)\\"|\\S+)\\s*',B,re.S):
+-		A=D.group().strip()
+-		if A[:1]==A[-1:]and A[:1]in'"\'':A=A[1:-1].encode('ascii','backslashreplace').decode('unicode-escape')
+-		try:A=type(B)(A)
+-		except UnicodeError:pass
+-		C.append(A)
+-	return C
+-class Option:
+-	def __init__(A,opts,dest,action=_A,nargs=1,const=_A,obj=_A):
+-		D=action;A._short_opts=[];A._long_opts=[];A.prefixes=set()
+-		for B in opts:
+-			C,E=split_opt(B)
+-			if not C:raise ValueError(f"Invalid start character for option ({B})")
+-			A.prefixes.add(C[0])
+-			if len(C)==1 and len(E)==1:A._short_opts.append(B)
+-			else:A._long_opts.append(B);A.prefixes.add(C)
+-		if D is _A:D=_B
+-		A.dest=dest;A.action=D;A.nargs=nargs;A.const=const;A.obj=obj
+-	@property
+-	def takes_value(self):return self.action in(_B,_C)
+-	def process(A,value,state):
+-		C=value;B=state
+-		if A.action==_B:B.opts[A.dest]=C
+-		elif A.action=='store_const':B.opts[A.dest]=A.const
+-		elif A.action==_C:B.opts.setdefault(A.dest,[]).append(C)
+-		elif A.action=='append_const':B.opts.setdefault(A.dest,[]).append(A.const)
+-		elif A.action=='count':B.opts[A.dest]=B.opts.get(A.dest,0)+1
+-		else:raise ValueError(f"unknown action '{A.action}'")
+-		B.order.append(A.obj)
+-class Argument:
+-	def __init__(A,dest,nargs=1,obj=_A):A.dest=dest;A.nargs=nargs;A.obj=obj
+-	def process(A,value,state):
+-		C=state;B=value
+-		if A.nargs>1:
+-			D=sum((1 for A in B if A is _A))
+-			if D==len(B):B=_A
+-			elif D!=0:raise BadArgumentUsage(f"argument {A.dest} takes {A.nargs} values")
+-		C.opts[A.dest]=B;C.order.append(A.obj)
+-class ParsingState:
+-	def __init__(A,rargs):A.opts={};A.largs=[];A.rargs=rargs;A.order=[]
+-class OptionParser:
+-	def __init__(A,ctx=_A):
+-		B=ctx;A.ctx=B;A.allow_interspersed_args=True;A.ignore_unknown_options=_D
+-		if B is not _A:A.allow_interspersed_args=B.allow_interspersed_args;A.ignore_unknown_options=B.ignore_unknown_options
+-		A._short_opt={};A._long_opt={};A._opt_prefixes={'-','--'};A._args=[]
+-	def add_option(B,opts,dest,action=_A,nargs=1,const=_A,obj=_A):
+-		D=obj;C=opts
+-		if D is _A:D=dest
+-		C=[normalize_opt(A,B.ctx)for A in C];A=Option(C,dest,action=action,nargs=nargs,const=const,obj=D);B._opt_prefixes.update(A.prefixes)
+-		for E in A._short_opts:B._short_opt[E]=A
+-		for E in A._long_opts:B._long_opt[E]=A
+-	def add_argument(B,dest,nargs=1,obj=_A):
+-		A=obj
+-		if A is _A:A=dest
+-		B._args.append(Argument(dest=dest,nargs=nargs,obj=A))
+-	def parse_args(B,args):
+-		A=ParsingState(args)
+-		try:B._process_args_for_options(A);B._process_args_for_args(A)
+-		except UsageError:
+-			if B.ctx is _A or not B.ctx.resilient_parsing:raise
+-		return A.opts,A.largs,A.order
+-	def _process_args_for_args(B,state):
+-		A=state;C,D=_unpack_args(A.largs+A.rargs,[A.nargs for A in B._args])
+-		for (E,F) in enumerate(B._args):F.process(C[E],A)
+-		A.largs=D;A.rargs=[]
+-	def _process_args_for_options(C,state):
+-		B=state
+-		while B.rargs:
+-			A=B.rargs.pop(0);D=len(A)
+-			if A=='--':return
+-			elif A[:1]in C._opt_prefixes and D>1:C._process_opts(A,B)
+-			elif C.allow_interspersed_args:B.largs.append(A)
+-			else:B.rargs.insert(0,A);return
+-	def _match_long_opt(D,opt,explicit_value,state):
+-		E=explicit_value;B=state;A=opt
+-		if A not in D._long_opt:H=[B for B in D._long_opt if B.startswith(A)];raise NoSuchOption(A,possibilities=H,ctx=D.ctx)
+-		F=D._long_opt[A]
+-		if F.takes_value:
+-			if E is not _A:B.rargs.insert(0,E)
+-			C=F.nargs
+-			if len(B.rargs)<C:_error_opt_args(C,A)
+-			elif C==1:G=B.rargs.pop(0)
+-			else:G=tuple(B.rargs[:C]);del B.rargs[:C]
+-		elif E is not _A:raise BadOptionUsage(A,f"{A} option does not take a value")
+-		else:G=_A
+-		F.process(G,B)
+-	def _match_short_opt(B,arg,state):
+-		D=arg;A=state;J=_D;F=1;K=D[0];G=[]
+-		for L in D[1:]:
+-			H=normalize_opt(f"{K}{L}",B.ctx);E=B._short_opt.get(H);F+=1
+-			if not E:
+-				if B.ignore_unknown_options:G.append(L);continue
+-				raise NoSuchOption(H,ctx=B.ctx)
+-			if E.takes_value:
+-				if F<len(D):A.rargs.insert(0,D[F:]);J=True
+-				C=E.nargs
+-				if len(A.rargs)<C:_error_opt_args(C,H)
+-				elif C==1:I=A.rargs.pop(0)
+-				else:I=tuple(A.rargs[:C]);del A.rargs[:C]
+-			else:I=_A
+-			E.process(I,A)
+-			if J:break
+-		if B.ignore_unknown_options and G:A.largs.append(f"{K}{''.join(G)}")
+-	def _process_opts(B,arg,state):
+-		G='=';C=state;A=arg;D=_A
+-		if G in A:E,D=A.split(G,1)
+-		else:E=A
+-		F=normalize_opt(E,B.ctx)
+-		try:B._match_long_opt(F,D,C)
+-		except NoSuchOption:
+-			if A[:2]not in B._opt_prefixes:return B._match_short_opt(A,C)
+-			if not B.ignore_unknown_options:raise
+-			C.largs.append(A)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/termui.py b/dynaconf/vendor/click/termui.py
+deleted file mode 100644
+index 2f2fdfe..0000000
+--- a/dynaconf/vendor/click/termui.py
++++ /dev/null
+@@ -1,135 +0,0 @@
+-_C=True
+-_B=False
+-_A=None
+-import inspect,io,itertools,os,struct,sys
+-from ._compat import DEFAULT_COLUMNS,get_winterm_size,isatty,strip_ansi,WIN
+-from .exceptions import Abort
+-from .exceptions import UsageError
+-from .globals import resolve_color_default
+-from .types import Choice
+-from .types import convert_type
+-from .types import Path
+-from .utils import echo
+-from .utils import LazyFile
+-visible_prompt_func=input
+-_ansi_colors={'black':30,'red':31,'green':32,'yellow':33,'blue':34,'magenta':35,'cyan':36,'white':37,'reset':39,'bright_black':90,'bright_red':91,'bright_green':92,'bright_yellow':93,'bright_blue':94,'bright_magenta':95,'bright_cyan':96,'bright_white':97}
+-_ansi_reset_all='\x1b[0m'
+-def hidden_prompt_func(prompt):import getpass as A;return A.getpass(prompt)
+-def _build_prompt(text,suffix,show_default=_B,default=_A,show_choices=_C,type=_A):
+-	B=default;A=text
+-	if type is not _A and show_choices and isinstance(type,Choice):A+=f" ({', '.join(map(str,type.choices))})"
+-	if B is not _A and show_default:A=f"{A} [{_format_default(B)}]"
+-	return f"{A}{suffix}"
+-def _format_default(default):
+-	A=default
+-	if isinstance(A,(io.IOBase,LazyFile))and hasattr(A,'name'):return A.name
+-	return A
+-def prompt(text,default=_A,hide_input=_B,confirmation_prompt=_B,type=_A,value_proc=_A,prompt_suffix=': ',show_default=_C,err=_B,show_choices=_C):
+-	F=hide_input;C=err;B=value_proc;A=default;E=_A
+-	def G(text):
+-		A=hidden_prompt_func if F else visible_prompt_func
+-		try:echo(text,nl=_B,err=C);return A('')
+-		except (KeyboardInterrupt,EOFError):
+-			if F:echo(_A,err=C)
+-			raise Abort()
+-	if B is _A:B=convert_type(type,A)
+-	I=_build_prompt(text,prompt_suffix,show_default,A,show_choices,type)
+-	while 1:
+-		while 1:
+-			D=G(I)
+-			if D:break
+-			elif A is not _A:
+-				if isinstance(B,Path):D=A;break
+-				return A
+-		try:E=B(D)
+-		except UsageError as J:echo(f"Error: {J.message}",err=C);continue
+-		if not confirmation_prompt:return E
+-		while 1:
+-			H=G('Repeat for confirmation: ')
+-			if H:break
+-		if D==H:return E
+-		echo('Error: the two entered values do not match',err=C)
+-def confirm(text,default=_B,abort=_B,prompt_suffix=': ',show_default=_C,err=_B):
+-	C=default;D=_build_prompt(text,prompt_suffix,show_default,'Y/n'if C else'y/N')
+-	while 1:
+-		try:echo(D,nl=_B,err=err);B=visible_prompt_func('').lower().strip()
+-		except (KeyboardInterrupt,EOFError):raise Abort()
+-		if B in('y','yes'):A=_C
+-		elif B in('n','no'):A=_B
+-		elif B=='':A=C
+-		else:echo('Error: invalid input',err=err);continue
+-		break
+-	if abort and not A:raise Abort()
+-	return A
+-def get_terminal_size():
+-	import shutil as C
+-	if hasattr(C,'get_terminal_size'):return C.get_terminal_size()
+-	if get_winterm_size is not _A:
+-		D=get_winterm_size()
+-		if D==(0,0):return 79,24
+-		else:return D
+-	def B(fd):
+-		try:import fcntl,termios as A;B=struct.unpack('hh',fcntl.ioctl(fd,A.TIOCGWINSZ,'1234'))
+-		except Exception:return
+-		return B
+-	A=B(0)or B(1)or B(2)
+-	if not A:
+-		try:
+-			E=os.open(os.ctermid(),os.O_RDONLY)
+-			try:A=B(E)
+-			finally:os.close(E)
+-		except Exception:pass
+-	if not A or not A[0]or not A[1]:A=os.environ.get('LINES',25),os.environ.get('COLUMNS',DEFAULT_COLUMNS)
+-	return int(A[1]),int(A[0])
+-def echo_via_pager(text_or_generator,color=_A):
+-	B=color;A=text_or_generator;B=resolve_color_default(B)
+-	if inspect.isgeneratorfunction(A):C=A()
+-	elif isinstance(A,str):C=[A]
+-	else:C=iter(A)
+-	D=(A if isinstance(A,str)else str(A)for A in C);from ._termui_impl import pager;return pager(itertools.chain(D,'\n'),B)
+-def progressbar(iterable=_A,length=_A,label=_A,show_eta=_C,show_percent=_A,show_pos=_B,item_show_func=_A,fill_char='#',empty_char='-',bar_template='%(label)s  [%(bar)s]  %(info)s',info_sep='  ',width=36,file=_A,color=_A):A=color;from ._termui_impl import ProgressBar as B;A=resolve_color_default(A);return B(iterable=iterable,length=length,show_eta=show_eta,show_percent=show_percent,show_pos=show_pos,item_show_func=item_show_func,fill_char=fill_char,empty_char=empty_char,bar_template=bar_template,info_sep=info_sep,file=file,label=label,width=width,color=A)
+-def clear():
+-	if not isatty(sys.stdout):return
+-	if WIN:os.system('cls')
+-	else:sys.stdout.write('\x1b[2J\x1b[1;1H')
+-def style(text,fg=_A,bg=_A,bold=_A,dim=_A,underline=_A,blink=_A,reverse=_A,reset=_C):
+-	D=reverse;C=blink;B=underline;A=[]
+-	if fg:
+-		try:A.append(f"\x1b[{_ansi_colors[fg]}m")
+-		except KeyError:raise TypeError(f"Unknown color {fg!r}")
+-	if bg:
+-		try:A.append(f"\x1b[{_ansi_colors[bg]+10}m")
+-		except KeyError:raise TypeError(f"Unknown color {bg!r}")
+-	if bold is not _A:A.append(f"\x1b[{1 if bold else 22}m")
+-	if dim is not _A:A.append(f"\x1b[{2 if dim else 22}m")
+-	if B is not _A:A.append(f"\x1b[{4 if B else 24}m")
+-	if C is not _A:A.append(f"\x1b[{5 if C else 25}m")
+-	if D is not _A:A.append(f"\x1b[{7 if D else 27}m")
+-	A.append(text)
+-	if reset:A.append(_ansi_reset_all)
+-	return ''.join(A)
+-def unstyle(text):return strip_ansi(text)
+-def secho(message=_A,file=_A,nl=_C,err=_B,color=_A,**B):
+-	A=message
+-	if A is not _A:A=style(A,**B)
+-	return echo(A,file=file,nl=nl,err=err,color=color)
+-def edit(text=_A,editor=_A,env=_A,require_save=_C,extension='.txt',filename=_A):
+-	B=filename;A=editor;from ._termui_impl import Editor as C;A=C(editor=A,env=env,require_save=require_save,extension=extension)
+-	if B is _A:return A.edit(text)
+-	A.edit_file(B)
+-def launch(url,wait=_B,locate=_B):from ._termui_impl import open_url as A;return A(url,wait=wait,locate=locate)
+-_getchar=_A
+-def getchar(echo=_B):
+-	A=_getchar
+-	if A is _A:from ._termui_impl import getchar as A
+-	return A(echo)
+-def raw_terminal():from ._termui_impl import raw_terminal as A;return A()
+-def pause(info='Press any key to continue ...',err=_B):
+-	A=info
+-	if not isatty(sys.stdin)or not isatty(sys.stdout):return
+-	try:
+-		if A:echo(A,nl=_B,err=err)
+-		try:getchar()
+-		except (KeyboardInterrupt,EOFError):pass
+-	finally:
+-		if A:echo(err=err)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/testing.py b/dynaconf/vendor/click/testing.py
+deleted file mode 100644
+index f78bc5f..0000000
+--- a/dynaconf/vendor/click/testing.py
++++ /dev/null
+@@ -1,108 +0,0 @@
+-_E='replace'
+-_D=False
+-_C='\n'
+-_B='\r\n'
+-_A=None
+-import contextlib,io,os,shlex,shutil,sys,tempfile
+-from .  import formatting,termui,utils
+-from ._compat import _find_binary_reader
+-class EchoingStdin:
+-	def __init__(A,input,output):A._input=input;A._output=output
+-	def __getattr__(A,x):return getattr(A._input,x)
+-	def _echo(A,rv):A._output.write(rv);return rv
+-	def read(A,n=-1):return A._echo(A._input.read(n))
+-	def readline(A,n=-1):return A._echo(A._input.readline(n))
+-	def readlines(A):return[A._echo(B)for B in A._input.readlines()]
+-	def __iter__(A):return iter((A._echo(B)for B in A._input))
+-	def __repr__(A):return repr(A._input)
+-def make_input_stream(input,charset):
+-	if hasattr(input,'read'):
+-		A=_find_binary_reader(input)
+-		if A is not _A:return A
+-		raise TypeError('Could not find binary reader for input stream.')
+-	if input is _A:input=b''
+-	elif not isinstance(input,bytes):input=input.encode(charset)
+-	return io.BytesIO(input)
+-class Result:
+-	def __init__(A,runner,stdout_bytes,stderr_bytes,exit_code,exception,exc_info=_A):A.runner=runner;A.stdout_bytes=stdout_bytes;A.stderr_bytes=stderr_bytes;A.exit_code=exit_code;A.exception=exception;A.exc_info=exc_info
+-	@property
+-	def output(self):return self.stdout
+-	@property
+-	def stdout(self):return self.stdout_bytes.decode(self.runner.charset,_E).replace(_B,_C)
+-	@property
+-	def stderr(self):
+-		A=self
+-		if A.stderr_bytes is _A:raise ValueError('stderr not separately captured')
+-		return A.stderr_bytes.decode(A.runner.charset,_E).replace(_B,_C)
+-	def __repr__(A):B=repr(A.exception)if A.exception else'okay';return f"<{type(A).__name__} {B}>"
+-class CliRunner:
+-	def __init__(A,charset='utf-8',env=_A,echo_stdin=_D,mix_stderr=True):A.charset=charset;A.env=env or{};A.echo_stdin=echo_stdin;A.mix_stderr=mix_stderr
+-	def get_default_prog_name(A,cli):return cli.name or'root'
+-	def make_env(C,overrides=_A):
+-		A=overrides;B=dict(C.env)
+-		if A:B.update(A)
+-		return B
+-	@contextlib.contextmanager
+-	def isolation(self,input=_A,env=_A,color=_D):
+-		D=env;A=self;input=make_input_stream(input,A.charset);H=sys.stdin;I=sys.stdout;J=sys.stderr;K=formatting.FORCED_WIDTH;formatting.FORCED_WIDTH=80;D=A.make_env(D);E=io.BytesIO()
+-		if A.echo_stdin:input=EchoingStdin(input,E)
+-		input=io.TextIOWrapper(input,encoding=A.charset);sys.stdout=io.TextIOWrapper(E,encoding=A.charset)
+-		if not A.mix_stderr:F=io.BytesIO();sys.stderr=io.TextIOWrapper(F,encoding=A.charset)
+-		if A.mix_stderr:sys.stderr=sys.stdout
+-		sys.stdin=input
+-		def L(prompt=_A):sys.stdout.write(prompt or'');A=input.readline().rstrip(_B);sys.stdout.write(f"{A}\n");sys.stdout.flush();return A
+-		def M(prompt=_A):sys.stdout.write(f"{prompt or''}\n");sys.stdout.flush();return input.readline().rstrip(_B)
+-		def N(echo):
+-			A=sys.stdin.read(1)
+-			if echo:sys.stdout.write(A);sys.stdout.flush()
+-			return A
+-		O=color
+-		def P(stream=_A,color=_A):
+-			A=color
+-			if A is _A:return not O
+-			return not A
+-		Q=termui.visible_prompt_func;R=termui.hidden_prompt_func;S=termui._getchar;T=utils.should_strip_ansi;termui.visible_prompt_func=L;termui.hidden_prompt_func=M;termui._getchar=N;utils.should_strip_ansi=P;G={}
+-		try:
+-			for (B,C) in D.items():
+-				G[B]=os.environ.get(B)
+-				if C is _A:
+-					try:del os.environ[B]
+-					except Exception:pass
+-				else:os.environ[B]=C
+-			yield(E,not A.mix_stderr and F)
+-		finally:
+-			for (B,C) in G.items():
+-				if C is _A:
+-					try:del os.environ[B]
+-					except Exception:pass
+-				else:os.environ[B]=C
+-			sys.stdout=I;sys.stderr=J;sys.stdin=H;termui.visible_prompt_func=Q;termui.hidden_prompt_func=R;termui._getchar=S;utils.should_strip_ansi=T;formatting.FORCED_WIDTH=K
+-	def invoke(B,cli,args=_A,input=_A,env=_A,catch_exceptions=True,color=_D,**G):
+-		C=args;E=_A
+-		with B.isolation(input=input,env=env,color=color)as H:
+-			F=_A;A=0
+-			if isinstance(C,str):C=shlex.split(C)
+-			try:I=G.pop('prog_name')
+-			except KeyError:I=B.get_default_prog_name(cli)
+-			try:cli.main(args=C or(),prog_name=I,**G)
+-			except SystemExit as D:
+-				E=sys.exc_info();A=D.code
+-				if A is _A:A=0
+-				if A!=0:F=D
+-				if not isinstance(A,int):sys.stdout.write(str(A));sys.stdout.write(_C);A=1
+-			except Exception as D:
+-				if not catch_exceptions:raise
+-				F=D;A=1;E=sys.exc_info()
+-			finally:
+-				sys.stdout.flush();K=H[0].getvalue()
+-				if B.mix_stderr:J=_A
+-				else:J=H[1].getvalue()
+-		return Result(runner=B,stdout_bytes=K,stderr_bytes=J,exit_code=A,exception=F,exc_info=E)
+-	@contextlib.contextmanager
+-	def isolated_filesystem(self):
+-		B=os.getcwd();A=tempfile.mkdtemp();os.chdir(A)
+-		try:yield A
+-		finally:
+-			os.chdir(B)
+-			try:shutil.rmtree(A)
+-			except OSError:pass
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/types.py b/dynaconf/vendor/click/types.py
+deleted file mode 100644
+index 30ee5fa..0000000
+--- a/dynaconf/vendor/click/types.py
++++ /dev/null
+@@ -1,227 +0,0 @@
+-_F='text'
+-_E='replace'
+-_D='utf-8'
+-_C=True
+-_B=False
+-_A=None
+-import os,stat
+-from datetime import datetime
+-from ._compat import _get_argv_encoding
+-from ._compat import filename_to_ui
+-from ._compat import get_filesystem_encoding
+-from ._compat import get_strerror
+-from ._compat import open_stream
+-from .exceptions import BadParameter
+-from .utils import LazyFile
+-from .utils import safecall
+-class ParamType:
+-	is_composite=_B;name=_A;envvar_list_splitter=_A
+-	def __call__(B,value,param=_A,ctx=_A):
+-		A=value
+-		if A is not _A:return B.convert(A,param,ctx)
+-	def get_metavar(A,param):0
+-	def get_missing_message(A,param):0
+-	def convert(A,value,param,ctx):return value
+-	def split_envvar_value(A,rv):return (rv or'').split(A.envvar_list_splitter)
+-	def fail(A,message,param=_A,ctx=_A):raise BadParameter(message,ctx=ctx,param=param)
+-class CompositeParamType(ParamType):
+-	is_composite=_C
+-	@property
+-	def arity(self):raise NotImplementedError()
+-class FuncParamType(ParamType):
+-	def __init__(A,func):A.name=func.__name__;A.func=func
+-	def convert(B,value,param,ctx):
+-		A=value
+-		try:return B.func(A)
+-		except ValueError:
+-			try:A=str(A)
+-			except UnicodeError:A=A.decode(_D,_E)
+-			B.fail(A,param,ctx)
+-class UnprocessedParamType(ParamType):
+-	name=_F
+-	def convert(A,value,param,ctx):return value
+-	def __repr__(A):return'UNPROCESSED'
+-class StringParamType(ParamType):
+-	name=_F
+-	def convert(D,value,param,ctx):
+-		A=value
+-		if isinstance(A,bytes):
+-			B=_get_argv_encoding()
+-			try:A=A.decode(B)
+-			except UnicodeError:
+-				C=get_filesystem_encoding()
+-				if C!=B:
+-					try:A=A.decode(C)
+-					except UnicodeError:A=A.decode(_D,_E)
+-				else:A=A.decode(_D,_E)
+-			return A
+-		return A
+-	def __repr__(A):return'STRING'
+-class Choice(ParamType):
+-	name='choice'
+-	def __init__(A,choices,case_sensitive=_C):A.choices=choices;A.case_sensitive=case_sensitive
+-	def get_metavar(A,param):return f"[{'|'.join(A.choices)}]"
+-	def get_missing_message(A,param):B=',\n\t'.join(A.choices);return f"Choose from:\n\t{B}"
+-	def convert(D,value,param,ctx):
+-		E=value;B=ctx;C=E;A={A:A for A in D.choices}
+-		if B is not _A and B.token_normalize_func is not _A:C=B.token_normalize_func(E);A={B.token_normalize_func(C):D for(C,D)in A.items()}
+-		if not D.case_sensitive:C=C.casefold();A={B.casefold():C for(B,C)in A.items()}
+-		if C in A:return A[C]
+-		D.fail(f"invalid choice: {E}. (choose from {', '.join(D.choices)})",param,B)
+-	def __repr__(A):return f"Choice({list(A.choices)})"
+-class DateTime(ParamType):
+-	name='datetime'
+-	def __init__(A,formats=_A):A.formats=formats or['%Y-%m-%d','%Y-%m-%dT%H:%M:%S','%Y-%m-%d %H:%M:%S']
+-	def get_metavar(A,param):return f"[{'|'.join(A.formats)}]"
+-	def _try_to_convert_date(A,value,format):
+-		try:return datetime.strptime(value,format)
+-		except ValueError:return _A
+-	def convert(A,value,param,ctx):
+-		B=value
+-		for format in A.formats:
+-			C=A._try_to_convert_date(B,format)
+-			if C:return C
+-		A.fail(f"invalid datetime format: {B}. (choose from {', '.join(A.formats)})")
+-	def __repr__(A):return'DateTime'
+-class IntParamType(ParamType):
+-	name='integer'
+-	def convert(B,value,param,ctx):
+-		A=value
+-		try:return int(A)
+-		except ValueError:B.fail(f"{A} is not a valid integer",param,ctx)
+-	def __repr__(A):return'INT'
+-class IntRange(IntParamType):
+-	name='integer range'
+-	def __init__(A,min=_A,max=_A,clamp=_B):A.min=min;A.max=max;A.clamp=clamp
+-	def convert(A,value,param,ctx):
+-		D=ctx;C=param;B=IntParamType.convert(A,value,C,D)
+-		if A.clamp:
+-			if A.min is not _A and B<A.min:return A.min
+-			if A.max is not _A and B>A.max:return A.max
+-		if A.min is not _A and B<A.min or A.max is not _A and B>A.max:
+-			if A.min is _A:A.fail(f"{B} is bigger than the maximum valid value {A.max}.",C,D)
+-			elif A.max is _A:A.fail(f"{B} is smaller than the minimum valid value {A.min}.",C,D)
+-			else:A.fail(f"{B} is not in the valid range of {A.min} to {A.max}.",C,D)
+-		return B
+-	def __repr__(A):return f"IntRange({A.min}, {A.max})"
+-class FloatParamType(ParamType):
+-	name='float'
+-	def convert(B,value,param,ctx):
+-		A=value
+-		try:return float(A)
+-		except ValueError:B.fail(f"{A} is not a valid floating point value",param,ctx)
+-	def __repr__(A):return'FLOAT'
+-class FloatRange(FloatParamType):
+-	name='float range'
+-	def __init__(A,min=_A,max=_A,clamp=_B):A.min=min;A.max=max;A.clamp=clamp
+-	def convert(A,value,param,ctx):
+-		D=ctx;C=param;B=FloatParamType.convert(A,value,C,D)
+-		if A.clamp:
+-			if A.min is not _A and B<A.min:return A.min
+-			if A.max is not _A and B>A.max:return A.max
+-		if A.min is not _A and B<A.min or A.max is not _A and B>A.max:
+-			if A.min is _A:A.fail(f"{B} is bigger than the maximum valid value {A.max}.",C,D)
+-			elif A.max is _A:A.fail(f"{B} is smaller than the minimum valid value {A.min}.",C,D)
+-			else:A.fail(f"{B} is not in the valid range of {A.min} to {A.max}.",C,D)
+-		return B
+-	def __repr__(A):return f"FloatRange({A.min}, {A.max})"
+-class BoolParamType(ParamType):
+-	name='boolean'
+-	def convert(B,value,param,ctx):
+-		A=value
+-		if isinstance(A,bool):return bool(A)
+-		A=A.lower()
+-		if A in('true','t','1','yes','y'):return _C
+-		elif A in('false','f','0','no','n'):return _B
+-		B.fail(f"{A} is not a valid boolean",param,ctx)
+-	def __repr__(A):return'BOOL'
+-class UUIDParameterType(ParamType):
+-	name='uuid'
+-	def convert(B,value,param,ctx):
+-		A=value;import uuid
+-		try:return uuid.UUID(A)
+-		except ValueError:B.fail(f"{A} is not a valid UUID value",param,ctx)
+-	def __repr__(A):return'UUID'
+-class File(ParamType):
+-	name='filename';envvar_list_splitter=os.path.pathsep
+-	def __init__(A,mode='r',encoding=_A,errors='strict',lazy=_A,atomic=_B):A.mode=mode;A.encoding=encoding;A.errors=errors;A.lazy=lazy;A.atomic=atomic
+-	def resolve_lazy_flag(A,value):
+-		if A.lazy is not _A:return A.lazy
+-		if value=='-':return _B
+-		elif'w'in A.mode:return _C
+-		return _B
+-	def convert(A,value,param,ctx):
+-		C=ctx;B=value
+-		try:
+-			if hasattr(B,'read')or hasattr(B,'write'):return B
+-			E=A.resolve_lazy_flag(B)
+-			if E:
+-				D=LazyFile(B,A.mode,A.encoding,A.errors,atomic=A.atomic)
+-				if C is not _A:C.call_on_close(D.close_intelligently)
+-				return D
+-			D,F=open_stream(B,A.mode,A.encoding,A.errors,atomic=A.atomic)
+-			if C is not _A:
+-				if F:C.call_on_close(safecall(D.close))
+-				else:C.call_on_close(safecall(D.flush))
+-			return D
+-		except OSError as G:A.fail(f"Could not open file: {filename_to_ui(B)}: {get_strerror(G)}",param,C)
+-class Path(ParamType):
+-	envvar_list_splitter=os.path.pathsep
+-	def __init__(A,exists=_B,file_okay=_C,dir_okay=_C,writable=_B,readable=_C,resolve_path=_B,allow_dash=_B,path_type=_A):
+-		A.exists=exists;A.file_okay=file_okay;A.dir_okay=dir_okay;A.writable=writable;A.readable=readable;A.resolve_path=resolve_path;A.allow_dash=allow_dash;A.type=path_type
+-		if A.file_okay and not A.dir_okay:A.name='file';A.path_type='File'
+-		elif A.dir_okay and not A.file_okay:A.name='directory';A.path_type='Directory'
+-		else:A.name='path';A.path_type='Path'
+-	def coerce_path_result(B,rv):
+-		A=rv
+-		if B.type is not _A and not isinstance(A,B.type):
+-			if B.type is str:A=A.decode(get_filesystem_encoding())
+-			else:A=A.encode(get_filesystem_encoding())
+-		return A
+-	def convert(A,value,param,ctx):
+-		E=ctx;D=param;B=value;C=B;G=A.file_okay and A.allow_dash and C in(b'-','-')
+-		if not G:
+-			if A.resolve_path:C=os.path.realpath(C)
+-			try:F=os.stat(C)
+-			except OSError:
+-				if not A.exists:return A.coerce_path_result(C)
+-				A.fail(f"{A.path_type} {filename_to_ui(B)!r} does not exist.",D,E)
+-			if not A.file_okay and stat.S_ISREG(F.st_mode):A.fail(f"{A.path_type} {filename_to_ui(B)!r} is a file.",D,E)
+-			if not A.dir_okay and stat.S_ISDIR(F.st_mode):A.fail(f"{A.path_type} {filename_to_ui(B)!r} is a directory.",D,E)
+-			if A.writable and not os.access(B,os.W_OK):A.fail(f"{A.path_type} {filename_to_ui(B)!r} is not writable.",D,E)
+-			if A.readable and not os.access(B,os.R_OK):A.fail(f"{A.path_type} {filename_to_ui(B)!r} is not readable.",D,E)
+-		return A.coerce_path_result(C)
+-class Tuple(CompositeParamType):
+-	def __init__(A,types):A.types=[convert_type(A)for A in types]
+-	@property
+-	def name(self):return f"<{' '.join((A.name for A in self.types))}>"
+-	@property
+-	def arity(self):return len(self.types)
+-	def convert(A,value,param,ctx):
+-		B=value
+-		if len(B)!=len(A.types):raise TypeError('It would appear that nargs is set to conflict with the composite type arity.')
+-		return tuple((C(D,param,ctx)for(C,D)in zip(A.types,B)))
+-def convert_type(ty,default=_A):
+-	B=default;A=ty;C=_B
+-	if A is _A and B is not _A:
+-		if isinstance(B,tuple):A=tuple(map(type,B))
+-		else:A=type(B)
+-		C=_C
+-	if isinstance(A,tuple):return Tuple(A)
+-	if isinstance(A,ParamType):return A
+-	if A is str or A is _A:return STRING
+-	if A is int:return INT
+-	if A is bool and not C:return BOOL
+-	if A is float:return FLOAT
+-	if C:return STRING
+-	if __debug__:
+-		try:
+-			if issubclass(A,ParamType):raise AssertionError(f"Attempted to use an uninstantiated parameter type ({A}).")
+-		except TypeError:pass
+-	return FuncParamType(A)
+-UNPROCESSED=UnprocessedParamType()
+-STRING=StringParamType()
+-INT=IntParamType()
+-FLOAT=FloatParamType()
+-BOOL=BoolParamType()
+-UUID=UUIDParameterType()
+\ No newline at end of file
+diff --git a/dynaconf/vendor/click/utils.py b/dynaconf/vendor/click/utils.py
+deleted file mode 100644
+index e0cf442..0000000
+--- a/dynaconf/vendor/click/utils.py
++++ /dev/null
+@@ -1,119 +0,0 @@
+-_D='strict'
+-_C=True
+-_B=False
+-_A=None
+-import os,sys
+-from ._compat import _default_text_stderr,_default_text_stdout,_find_binary_writer,auto_wrap_for_ansi,binary_streams,filename_to_ui,get_filesystem_encoding,get_strerror,is_bytes,open_stream,should_strip_ansi,strip_ansi,text_streams,WIN
+-from .globals import resolve_color_default
+-echo_native_types=str,bytes,bytearray
+-def _posixify(name):return '-'.join(name.split()).lower()
+-def safecall(func):
+-	def A(*A,**B):
+-		try:return func(*A,**B)
+-		except Exception:pass
+-	return A
+-def make_str(value):
+-	A=value
+-	if isinstance(A,bytes):
+-		try:return A.decode(get_filesystem_encoding())
+-		except UnicodeError:return A.decode('utf-8','replace')
+-	return str(A)
+-def make_default_short_help(help,max_length=45):
+-	F=help.split();D=0;A=[];C=_B
+-	for B in F:
+-		if B[-1:]=='.':C=_C
+-		E=1+len(B)if A else len(B)
+-		if D+E>max_length:A.append('...');C=_C
+-		else:
+-			if A:A.append(' ')
+-			A.append(B)
+-		if C:break
+-		D+=E
+-	return ''.join(A)
+-class LazyFile:
+-	def __init__(A,filename,mode='r',encoding=_A,errors=_D,atomic=_B):
+-		E=errors;D=encoding;C=mode;B=filename;A.name=B;A.mode=C;A.encoding=D;A.errors=E;A.atomic=atomic
+-		if B=='-':A._f,A.should_close=open_stream(B,C,D,E)
+-		else:
+-			if'r'in C:open(B,C).close()
+-			A._f=_A;A.should_close=_C
+-	def __getattr__(A,name):return getattr(A.open(),name)
+-	def __repr__(A):
+-		if A._f is not _A:return repr(A._f)
+-		return f"<unopened file '{A.name}' {A.mode}>"
+-	def open(A):
+-		if A._f is not _A:return A._f
+-		try:B,A.should_close=open_stream(A.name,A.mode,A.encoding,A.errors,atomic=A.atomic)
+-		except OSError as C:from .exceptions import FileError as D;raise D(A.name,hint=get_strerror(C))
+-		A._f=B;return B
+-	def close(A):
+-		if A._f is not _A:A._f.close()
+-	def close_intelligently(A):
+-		if A.should_close:A.close()
+-	def __enter__(A):return A
+-	def __exit__(A,exc_type,exc_value,tb):A.close_intelligently()
+-	def __iter__(A):A.open();return iter(A._f)
+-class KeepOpenFile:
+-	def __init__(A,file):A._file=file
+-	def __getattr__(A,name):return getattr(A._file,name)
+-	def __enter__(A):return A
+-	def __exit__(A,exc_type,exc_value,tb):0
+-	def __repr__(A):return repr(A._file)
+-	def __iter__(A):return iter(A._file)
+-def echo(message=_A,file=_A,nl=_C,err=_B,color=_A):
+-	C=color;B=file;A=message
+-	if B is _A:
+-		if err:B=_default_text_stderr()
+-		else:B=_default_text_stdout()
+-	if A is not _A and not isinstance(A,echo_native_types):A=str(A)
+-	if nl:
+-		A=A or''
+-		if isinstance(A,str):A+='\n'
+-		else:A+=b'\n'
+-	if A and is_bytes(A):
+-		D=_find_binary_writer(B)
+-		if D is not _A:B.flush();D.write(A);D.flush();return
+-	if A and not is_bytes(A):
+-		C=resolve_color_default(C)
+-		if should_strip_ansi(B,C):A=strip_ansi(A)
+-		elif WIN:
+-			if auto_wrap_for_ansi is not _A:B=auto_wrap_for_ansi(B)
+-			elif not C:A=strip_ansi(A)
+-	if A:B.write(A)
+-	B.flush()
+-def get_binary_stream(name):
+-	A=binary_streams.get(name)
+-	if A is _A:raise TypeError(f"Unknown standard stream '{name}'")
+-	return A()
+-def get_text_stream(name,encoding=_A,errors=_D):
+-	A=text_streams.get(name)
+-	if A is _A:raise TypeError(f"Unknown standard stream '{name}'")
+-	return A(encoding,errors)
+-def open_file(filename,mode='r',encoding=_A,errors=_D,lazy=_B,atomic=_B):
+-	E=atomic;D=errors;C=encoding;B=filename
+-	if lazy:return LazyFile(B,mode,C,D,atomic=E)
+-	A,F=open_stream(B,mode,C,D,atomic=E)
+-	if not F:A=KeepOpenFile(A)
+-	return A
+-def get_os_args():import warnings as A;A.warn("'get_os_args' is deprecated and will be removed in 8.1. Access 'sys.argv[1:]' directly instead.",DeprecationWarning,stacklevel=2);return sys.argv[1:]
+-def format_filename(filename,shorten=_B):
+-	A=filename
+-	if shorten:A=os.path.basename(A)
+-	return filename_to_ui(A)
+-def get_app_dir(app_name,roaming=_C,force_posix=_B):
+-	A=app_name
+-	if WIN:
+-		C='APPDATA'if roaming else'LOCALAPPDATA';B=os.environ.get(C)
+-		if B is _A:B=os.path.expanduser('~')
+-		return os.path.join(B,A)
+-	if force_posix:return os.path.join(os.path.expanduser(f"~/.{_posixify(A)}"))
+-	if sys.platform=='darwin':return os.path.join(os.path.expanduser('~/Library/Application Support'),A)
+-	return os.path.join(os.environ.get('XDG_CONFIG_HOME',os.path.expanduser('~/.config')),_posixify(A))
+-class PacifyFlushWrapper:
+-	def __init__(A,wrapped):A.wrapped=wrapped
+-	def flush(A):
+-		try:A.wrapped.flush()
+-		except OSError as B:
+-			import errno
+-			if B.errno!=errno.EPIPE:raise
+-	def __getattr__(A,attr):return getattr(A.wrapped,attr)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/README.md b/dynaconf/vendor/dotenv/README.md
+deleted file mode 100644
+index 94a816f..0000000
+--- a/dynaconf/vendor/dotenv/README.md
++++ /dev/null
+@@ -1,6 +0,0 @@
+-## python-bodotenv
+-
+-Vendored dep taken from: https://github.com/theskumar/python-dotenv
+-Licensed under BSD: https://github.com/theskumar/python-dotenv/blob/master/LICENSE
+-
+-Current version: 0.13.0
+diff --git a/dynaconf/vendor/dotenv/__init__.py b/dynaconf/vendor/dotenv/__init__.py
+deleted file mode 100644
+index 25aa760..0000000
+--- a/dynaconf/vendor/dotenv/__init__.py
++++ /dev/null
+@@ -1,18 +0,0 @@
+-_A=None
+-from .compat import IS_TYPE_CHECKING
+-from .main import load_dotenv,get_key,set_key,unset_key,find_dotenv,dotenv_values
+-if IS_TYPE_CHECKING:from typing import Any,Optional
+-def load_ipython_extension(ipython):from .ipython import load_ipython_extension as A;A(ipython)
+-def get_cli_string(path=_A,action=_A,key=_A,value=_A,quote=_A):
+-	E=' ';D=quote;C=action;B=value;A=['dotenv']
+-	if D:A.append('-q %s'%D)
+-	if path:A.append('-f %s'%path)
+-	if C:
+-		A.append(C)
+-		if key:
+-			A.append(key)
+-			if B:
+-				if E in B:A.append('"%s"'%B)
+-				else:A.append(B)
+-	return E.join(A).strip()
+-__all__=['get_cli_string','load_dotenv','dotenv_values','get_key','set_key','unset_key','find_dotenv','load_ipython_extension']
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/cli.py b/dynaconf/vendor/dotenv/cli.py
+deleted file mode 100644
+index 8599595..0000000
+--- a/dynaconf/vendor/dotenv/cli.py
++++ /dev/null
+@@ -1,56 +0,0 @@
+-_F='always'
+-_E='key'
+-_D='%s=%s'
+-_C='QUOTE'
+-_B='FILE'
+-_A=True
+-import os,sys
+-from subprocess import Popen
+-try:from dynaconf.vendor import click
+-except ImportError:sys.stderr.write('It seems python-dotenv is not installed with cli option. \nRun pip install "python-dotenv[cli]" to fix this.');sys.exit(1)
+-from .compat import IS_TYPE_CHECKING,to_env
+-from .main import dotenv_values,get_key,set_key,unset_key
+-from .version import __version__
+-if IS_TYPE_CHECKING:from typing import Any,List,Dict
+-@click.group()
+-@click.option('-f','--file',default=os.path.join(os.getcwd(),'.env'),type=click.Path(exists=_A),help='Location of the .env file, defaults to .env file in current working directory.')
+-@click.option('-q','--quote',default=_F,type=click.Choice([_F,'never','auto']),help='Whether to quote or not the variable values. Default mode is always. This does not affect parsing.')
+-@click.version_option(version=__version__)
+-@click.pass_context
+-def cli(ctx,file,quote):A=ctx;A.obj={};A.obj[_B]=file;A.obj[_C]=quote
+-@cli.command()
+-@click.pass_context
+-def list(ctx):
+-	A=ctx.obj[_B];B=dotenv_values(A)
+-	for (C,D) in B.items():click.echo(_D%(C,D))
+-@cli.command()
+-@click.pass_context
+-@click.argument(_E,required=_A)
+-@click.argument('value',required=_A)
+-def set(ctx,key,value):
+-	B=value;A=key;C=ctx.obj[_B];D=ctx.obj[_C];E,A,B=set_key(C,A,B,D)
+-	if E:click.echo(_D%(A,B))
+-	else:exit(1)
+-@cli.command()
+-@click.pass_context
+-@click.argument(_E,required=_A)
+-def get(ctx,key):
+-	B=ctx.obj[_B];A=get_key(B,key)
+-	if A:click.echo(_D%(key,A))
+-	else:exit(1)
+-@cli.command()
+-@click.pass_context
+-@click.argument(_E,required=_A)
+-def unset(ctx,key):
+-	A=key;B=ctx.obj[_B];C=ctx.obj[_C];D,A=unset_key(B,A,C)
+-	if D:click.echo('Successfully removed %s'%A)
+-	else:exit(1)
+-@cli.command(context_settings={'ignore_unknown_options':_A})
+-@click.pass_context
+-@click.argument('commandline',nargs=-1,type=click.UNPROCESSED)
+-def run(ctx,commandline):
+-	A=commandline;B=ctx.obj[_B];C={to_env(C):to_env(A)for(C,A)in dotenv_values(B).items()if A is not None}
+-	if not A:click.echo('No command given.');exit(1)
+-	D=run_command(A,C);exit(D)
+-def run_command(command,env):A=os.environ.copy();A.update(env);B=Popen(command,universal_newlines=_A,bufsize=0,shell=False,env=A);C,C=B.communicate();return B.returncode
+-if __name__=='__main__':cli()
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/compat.py b/dynaconf/vendor/dotenv/compat.py
+deleted file mode 100644
+index 09aad2f..0000000
+--- a/dynaconf/vendor/dotenv/compat.py
++++ /dev/null
+@@ -1,18 +0,0 @@
+-_A='utf-8'
+-import sys
+-PY2=sys.version_info[0]==2
+-if PY2:from StringIO import StringIO
+-else:from io import StringIO
+-def is_type_checking():
+-	try:from typing import TYPE_CHECKING as A
+-	except ImportError:return False
+-	return A
+-IS_TYPE_CHECKING=is_type_checking()
+-if IS_TYPE_CHECKING:from typing import Text
+-def to_env(text):
+-	if PY2:return text.encode(sys.getfilesystemencoding()or _A)
+-	else:return text
+-def to_text(string):
+-	A=string
+-	if PY2:return A.decode(_A)
+-	else:return A
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/ipython.py b/dynaconf/vendor/dotenv/ipython.py
+deleted file mode 100644
+index 47b92bc..0000000
+--- a/dynaconf/vendor/dotenv/ipython.py
++++ /dev/null
+@@ -1,18 +0,0 @@
+-from __future__ import print_function
+-_A='store_true'
+-from IPython.core.magic import Magics,line_magic,magics_class
+-from IPython.core.magic_arguments import argument,magic_arguments,parse_argstring
+-from .main import find_dotenv,load_dotenv
+-@magics_class
+-class IPythonDotEnv(Magics):
+-	@magic_arguments()
+-	@argument('-o','--override',action=_A,help='Indicate to override existing variables')
+-	@argument('-v','--verbose',action=_A,help='Indicate function calls to be verbose')
+-	@argument('dotenv_path',nargs='?',type=str,default='.env',help='Search in increasingly higher folders for the `dotenv_path`')
+-	@line_magic
+-	def dotenv(self,line):
+-		C=True;A=parse_argstring(self.dotenv,line);B=A.dotenv_path
+-		try:B=find_dotenv(B,C,C)
+-		except IOError:print('cannot find .env file');return
+-		load_dotenv(B,verbose=A.verbose,override=A.override)
+-def load_ipython_extension(ipython):ipython.register_magics(IPythonDotEnv)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/main.py b/dynaconf/vendor/dotenv/main.py
+deleted file mode 100644
+index 343e298..0000000
+--- a/dynaconf/vendor/dotenv/main.py
++++ /dev/null
+@@ -1,114 +0,0 @@
+-from __future__ import absolute_import,print_function,unicode_literals
+-_E='.env'
+-_D='always'
+-_C=True
+-_B=False
+-_A=None
+-import io,logging,os,re,shutil,sys,tempfile
+-from collections import OrderedDict
+-from contextlib import contextmanager
+-from .compat import IS_TYPE_CHECKING,PY2,StringIO,to_env
+-from .parser import Binding,parse_stream
+-logger=logging.getLogger(__name__)
+-if IS_TYPE_CHECKING:
+-	from typing import Dict,Iterator,Match,Optional,Pattern,Union,Text,IO,Tuple
+-	if sys.version_info>=(3,6):_PathLike=os.PathLike
+-	else:_PathLike=Text
+-	if sys.version_info>=(3,0):_StringIO=StringIO
+-	else:_StringIO=StringIO[Text]
+-__posix_variable=re.compile('\n    \\$\\{\n        (?P<name>[^\\}:]*)\n        (?::-\n            (?P<default>[^\\}]*)\n        )?\n    \\}\n    ',re.VERBOSE)
+-def with_warn_for_invalid_lines(mappings):
+-	for A in mappings:
+-		if A.error:logger.warning('Python-dotenv could not parse statement starting at line %s',A.original.line)
+-		yield A
+-class DotEnv:
+-	def __init__(A,dotenv_path,verbose=_B,encoding=_A,interpolate=_C):A.dotenv_path=dotenv_path;A._dict=_A;A.verbose=verbose;A.encoding=encoding;A.interpolate=interpolate
+-	@contextmanager
+-	def _get_stream(self):
+-		A=self
+-		if isinstance(A.dotenv_path,StringIO):yield A.dotenv_path
+-		elif os.path.isfile(A.dotenv_path):
+-			with io.open(A.dotenv_path,encoding=A.encoding)as B:yield B
+-		else:
+-			if A.verbose:logger.info('Python-dotenv could not find configuration file %s.',A.dotenv_path or _E)
+-			yield StringIO('')
+-	def dict(A):
+-		if A._dict:return A._dict
+-		B=OrderedDict(A.parse());A._dict=resolve_nested_variables(B)if A.interpolate else B;return A._dict
+-	def parse(B):
+-		with B._get_stream()as C:
+-			for A in with_warn_for_invalid_lines(parse_stream(C)):
+-				if A.key is not _A:yield(A.key,A.value)
+-	def set_as_environment_variables(C,override=_B):
+-		for (A,B) in C.dict().items():
+-			if A in os.environ and not override:continue
+-			if B is not _A:os.environ[to_env(A)]=to_env(B)
+-		return _C
+-	def get(A,key):
+-		B=key;C=A.dict()
+-		if B in C:return C[B]
+-		if A.verbose:logger.warning('Key %s not found in %s.',B,A.dotenv_path)
+-		return _A
+-def get_key(dotenv_path,key_to_get):return DotEnv(dotenv_path,verbose=_C).get(key_to_get)
+-@contextmanager
+-def rewrite(path):
+-	try:
+-		with tempfile.NamedTemporaryFile(mode='w+',delete=_B)as A:
+-			with io.open(path)as B:yield(B,A)
+-	except BaseException:
+-		if os.path.isfile(A.name):os.unlink(A.name)
+-		raise
+-	else:shutil.move(A.name,path)
+-def set_key(dotenv_path,key_to_set,value_to_set,quote_mode=_D):
+-	K='"';E=quote_mode;C=dotenv_path;B=key_to_set;A=value_to_set;A=A.strip("'").strip(K)
+-	if not os.path.exists(C):logger.warning("Can't write to %s - it doesn't exist.",C);return _A,B,A
+-	if' 'in A:E=_D
+-	if E==_D:F='"{}"'.format(A.replace(K,'\\"'))
+-	else:F=A
+-	G='{}={}\n'.format(B,F)
+-	with rewrite(C)as(J,D):
+-		H=_B
+-		for I in with_warn_for_invalid_lines(parse_stream(J)):
+-			if I.key==B:D.write(G);H=_C
+-			else:D.write(I.original.string)
+-		if not H:D.write(G)
+-	return _C,B,A
+-def unset_key(dotenv_path,key_to_unset,quote_mode=_D):
+-	B=dotenv_path;A=key_to_unset
+-	if not os.path.exists(B):logger.warning("Can't delete from %s - it doesn't exist.",B);return _A,A
+-	C=_B
+-	with rewrite(B)as(E,F):
+-		for D in with_warn_for_invalid_lines(parse_stream(E)):
+-			if D.key==A:C=_C
+-			else:F.write(D.original.string)
+-	if not C:logger.warning("Key %s not removed from %s - key doesn't exist.",A,B);return _A,A
+-	return C,A
+-def resolve_nested_variables(values):
+-	def C(name,default):A=default;A=A if A is not _A else'';C=os.getenv(name,B.get(name,A));return C
+-	def D(match):A=match.groupdict();return C(name=A['name'],default=A['default'])
+-	B={}
+-	for (E,A) in values.items():B[E]=__posix_variable.sub(D,A)if A is not _A else _A
+-	return B
+-def _walk_to_root(path):
+-	A=path
+-	if not os.path.exists(A):raise IOError('Starting path not found')
+-	if os.path.isfile(A):A=os.path.dirname(A)
+-	C=_A;B=os.path.abspath(A)
+-	while C!=B:yield B;D=os.path.abspath(os.path.join(B,os.path.pardir));C,B=B,D
+-def find_dotenv(filename=_E,raise_error_if_not_found=_B,usecwd=_B):
+-	H='.py'
+-	def E():B='__file__';A=__import__('__main__',_A,_A,fromlist=[B]);return not hasattr(A,B)
+-	if usecwd or E()or getattr(sys,'frozen',_B):B=os.getcwd()
+-	else:
+-		A=sys._getframe()
+-		if PY2 and not __file__.endswith(H):C=__file__.rsplit('.',1)[0]+H
+-		else:C=__file__
+-		while A.f_code.co_filename==C:assert A.f_back is not _A;A=A.f_back
+-		F=A.f_code.co_filename;B=os.path.dirname(os.path.abspath(F))
+-	for G in _walk_to_root(B):
+-		D=os.path.join(G,filename)
+-		if os.path.isfile(D):return D
+-	if raise_error_if_not_found:raise IOError('File not found')
+-	return''
+-def load_dotenv(dotenv_path=_A,stream=_A,verbose=_B,override=_B,interpolate=_C,**A):B=dotenv_path or stream or find_dotenv();return DotEnv(B,verbose=verbose,interpolate=interpolate,**A).set_as_environment_variables(override=override)
+-def dotenv_values(dotenv_path=_A,stream=_A,verbose=_B,interpolate=_C,**A):B=dotenv_path or stream or find_dotenv();return DotEnv(B,verbose=verbose,interpolate=interpolate,**A).dict()
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/parser.py b/dynaconf/vendor/dotenv/parser.py
+deleted file mode 100644
+index 65f4f31..0000000
+--- a/dynaconf/vendor/dotenv/parser.py
++++ /dev/null
+@@ -1,85 +0,0 @@
+-_I='error'
+-_H='original'
+-_G='value'
+-_F='key'
+-_E='Binding'
+-_D='line'
+-_C='string'
+-_B='Original'
+-_A=None
+-import codecs,re
+-from .compat import IS_TYPE_CHECKING,to_text
+-if IS_TYPE_CHECKING:from typing import IO,Iterator,Match,NamedTuple,Optional,Pattern,Sequence,Text,Tuple
+-def make_regex(string,extra_flags=0):return re.compile(to_text(string),re.UNICODE|extra_flags)
+-_newline=make_regex('(\\r\\n|\\n|\\r)')
+-_multiline_whitespace=make_regex('\\s*',extra_flags=re.MULTILINE)
+-_whitespace=make_regex('[^\\S\\r\\n]*')
+-_export=make_regex('(?:export[^\\S\\r\\n]+)?')
+-_single_quoted_key=make_regex("'([^']+)'")
+-_unquoted_key=make_regex('([^=\\#\\s]+)')
+-_equal_sign=make_regex('(=[^\\S\\r\\n]*)')
+-_single_quoted_value=make_regex("'((?:\\\\'|[^'])*)'")
+-_double_quoted_value=make_regex('"((?:\\\\"|[^"])*)"')
+-_unquoted_value_part=make_regex('([^ \\r\\n]*)')
+-_comment=make_regex('(?:[^\\S\\r\\n]*#[^\\r\\n]*)?')
+-_end_of_line=make_regex('[^\\S\\r\\n]*(?:\\r\\n|\\n|\\r|$)')
+-_rest_of_line=make_regex('[^\\r\\n]*(?:\\r|\\n|\\r\\n)?')
+-_double_quote_escapes=make_regex('\\\\[\\\\\'\\"abfnrtv]')
+-_single_quote_escapes=make_regex("\\\\[\\\\']")
+-try:import typing;Original=typing.NamedTuple(_B,[(_C,typing.Text),(_D,int)]);Binding=typing.NamedTuple(_E,[(_F,typing.Optional[typing.Text]),(_G,typing.Optional[typing.Text]),(_H,Original),(_I,bool)])
+-except ImportError:from collections import namedtuple;Original=namedtuple(_B,[_C,_D]);Binding=namedtuple(_E,[_F,_G,_H,_I])
+-class Position:
+-	def __init__(A,chars,line):A.chars=chars;A.line=line
+-	@classmethod
+-	def start(A):return A(chars=0,line=1)
+-	def set(A,other):B=other;A.chars=B.chars;A.line=B.line
+-	def advance(A,string):B=string;A.chars+=len(B);A.line+=len(re.findall(_newline,B))
+-class Error(Exception):0
+-class Reader:
+-	def __init__(A,stream):A.string=stream.read();A.position=Position.start();A.mark=Position.start()
+-	def has_next(A):return A.position.chars<len(A.string)
+-	def set_mark(A):A.mark.set(A.position)
+-	def get_marked(A):return Original(string=A.string[A.mark.chars:A.position.chars],line=A.mark.line)
+-	def peek(A,count):return A.string[A.position.chars:A.position.chars+count]
+-	def read(A,count):
+-		C=count;B=A.string[A.position.chars:A.position.chars+C]
+-		if len(B)<C:raise Error('read: End of string')
+-		A.position.advance(B);return B
+-	def read_regex(A,regex):
+-		B=regex.match(A.string,A.position.chars)
+-		if B is _A:raise Error('read_regex: Pattern not found')
+-		A.position.advance(A.string[B.start():B.end()]);return B.groups()
+-def decode_escapes(regex,string):
+-	def A(match):return codecs.decode(match.group(0),'unicode-escape')
+-	return regex.sub(A,string)
+-def parse_key(reader):
+-	A=reader;B=A.peek(1)
+-	if B=='#':return _A
+-	elif B=="'":C,=A.read_regex(_single_quoted_key)
+-	else:C,=A.read_regex(_unquoted_key)
+-	return C
+-def parse_unquoted_value(reader):
+-	A=reader;B=''
+-	while True:
+-		D,=A.read_regex(_unquoted_value_part);B+=D;C=A.peek(2)
+-		if len(C)<2 or C[0]in'\r\n'or C[1]in' #\r\n':return B
+-		B+=A.read(2)
+-def parse_value(reader):
+-	A=reader;B=A.peek(1)
+-	if B=="'":C,=A.read_regex(_single_quoted_value);return decode_escapes(_single_quote_escapes,C)
+-	elif B=='"':C,=A.read_regex(_double_quoted_value);return decode_escapes(_double_quote_escapes,C)
+-	elif B in('','\n','\r'):return''
+-	else:return parse_unquoted_value(A)
+-def parse_binding(reader):
+-	D=False;A=reader;A.set_mark()
+-	try:
+-		A.read_regex(_multiline_whitespace)
+-		if not A.has_next():return Binding(key=_A,value=_A,original=A.get_marked(),error=D)
+-		A.read_regex(_export);C=parse_key(A);A.read_regex(_whitespace)
+-		if A.peek(1)=='=':A.read_regex(_equal_sign);B=parse_value(A)
+-		else:B=_A
+-		A.read_regex(_comment);A.read_regex(_end_of_line);return Binding(key=C,value=B,original=A.get_marked(),error=D)
+-	except Error:A.read_regex(_rest_of_line);return Binding(key=_A,value=_A,original=A.get_marked(),error=True)
+-def parse_stream(stream):
+-	A=Reader(stream)
+-	while A.has_next():yield parse_binding(A)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/dotenv/py.typed b/dynaconf/vendor/dotenv/py.typed
+deleted file mode 100644
+index 7632ecf..0000000
+--- a/dynaconf/vendor/dotenv/py.typed
++++ /dev/null
+@@ -1 +0,0 @@
+-# Marker file for PEP 561
+diff --git a/dynaconf/vendor/dotenv/version.py b/dynaconf/vendor/dotenv/version.py
+deleted file mode 100644
+index 01d030a..0000000
+--- a/dynaconf/vendor/dotenv/version.py
++++ /dev/null
+@@ -1 +0,0 @@
+-__version__='0.13.0'
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/__init__.py b/dynaconf/vendor/ruamel/__init__.py
+deleted file mode 100644
+index e69de29..0000000
+diff --git a/dynaconf/vendor/ruamel/yaml/CHANGES b/dynaconf/vendor/ruamel/yaml/CHANGES
+deleted file mode 100644
+index a70a8ef..0000000
+--- a/dynaconf/vendor/ruamel/yaml/CHANGES
++++ /dev/null
+@@ -1,957 +0,0 @@
+-[0, 16, 10]: 2020-02-12
+-  - (auto) updated image references in README to sourceforge
+-
+-[0, 16, 9]: 2020-02-11
+-  - update CHANGES
+-
+-[0, 16, 8]: 2020-02-11
+-  - update requirements so that ruamel.yaml.clib is installed for 3.8,
+-    as it has become available (via manylinux builds)
+-
+-[0, 16, 7]: 2020-01-30
+-  - fix typchecking issue on TaggedScalar (reported by Jens Nielsen)
+-  - fix error in dumping literal scalar in sequence with comments before element
+-    (reported by `EJ Etherington <https://sourceforge.net/u/ejether/>`__)
+-
+-[0, 16, 6]: 2020-01-20
+-  - fix empty string mapping key roundtripping with preservation of quotes as `? ''`
+-    (reported via email by Tomer Aharoni).
+-  - fix incorrect state setting in class constructor (reported by `Douglas Raillard
+-    <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+-  - adjust deprecation warning test for Hashable, as that no longer warns (reported
+-    by `Jason Montleon <https://bitbucket.org/%7B8f377d12-8d5b-4069-a662-00a2674fee4e%7D/>`__)
+-
+-[0, 16, 5]: 2019-08-18
+-  - allow for ``YAML(typ=['unsafe', 'pytypes'])``
+-
+-[0, 16, 4]: 2019-08-16
+-  - fix output of TAG directives with # (reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-
+-[0, 16, 3]: 2019-08-15
+-  - move setting of version based on YAML directive to scanner, allowing to
+-    check for file version during TAG directive scanning
+-
+-[0, 16, 2]: 2019-08-15
+-  - preserve YAML and TAG directives on roundtrip, correctly output #
+-    in URL for YAML 1.2 (both reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-[0, 16, 1]: 2019-08-08
+-  - Force the use of new version of ruamel.yaml.clib (reported by `Alex Joz
+-    <https://bitbucket.org/%7B9af55900-2534-4212-976c-61339b6ffe14%7D/>`__)
+-  - Allow '#' in tag URI as these are allowed in YAML 1.2 (reported by
+-    `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-[0, 16, 0]: 2019-07-25
+-  - split of C source that generates .so file to ruamel.yaml.clib
+-  - duplicate keys are now an error when working with the old API as well
+-
+-[0, 15, 100]: 2019-07-17
+-  - fixing issue with dumping deep-copied data from commented YAML, by
+-    providing both the memo parameter to __deepcopy__, and by allowing
+-    startmarks to be compared on their content (reported by `Theofilos
+-    Petsios
+-    <https://bitbucket.org/%7Be550bc5d-403d-4fda-820b-bebbe71796d3%7D/>`__)
+-
+-[0, 15, 99]: 2019-07-12
+-  - add `py.typed` to distribution, based on a PR submitted by
+-    `Michael Crusoe
+-    <https://bitbucket.org/%7Bc9fbde69-e746-48f5-900d-34992b7860c8%7D/>`__
+-  - merge PR 40 (also by Michael Crusoe) to more accurately specify
+-    repository in the README (also reported in a misunderstood issue
+-    some time ago)
+-
+-[0, 15, 98]: 2019-07-09
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.12, needed
+-    for Python 3.8.0b2 (reported by `John Vandenberg
+-    <https://bitbucket.org/%7B6d4e8487-3c97-4dab-a060-088ec50c682c%7D/>`__)
+-
+-[0, 15, 97]: 2019-06-06
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.10, needed for
+-    Python 3.8.0b1
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.9, needed for
+-    Python 3.8.0a4 (reported by `Anthony Sottile
+-    <https://bitbucket.org/%7B569cc8ea-0d9e-41cb-94a4-19ea517324df%7D/>`__)
+-
+-[0, 15, 96]: 2019-05-16
+-  - fix failure to indent comments on round-trip anchored block style
+-    scalars in block sequence (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-
+-[0, 15, 95]: 2019-05-16
+-  - fix failure to round-trip anchored scalars in block sequence
+-    (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-  - wheel files for Python 3.4 no longer provided (`Python 3.4 EOL 2019-03-18
+-    <https://www.python.org/dev/peps/pep-0429/>`__)
+-
+-[0, 15, 94]: 2019-04-23
+-  - fix missing line-break after end-of-file comments not ending in
+-    line-break (reported by `Philip Thompson
+-    <https://bitbucket.org/%7Be42ba205-0876-4151-bcbe-ccaea5bd13ce%7D/>`__)
+-
+-[0, 15, 93]: 2019-04-21
+-  - fix failure to parse empty implicit flow mapping key
+-  - in YAML 1.1 plains scalars `y`, 'n', `Y`, and 'N' are now
+-    correctly recognised as booleans and such strings dumped quoted
+-    (reported by `Marcel Bollmann
+-    <https://bitbucket.org/%7Bd8850921-9145-4ad0-ac30-64c3bd9b036d%7D/>`__)
+-
+-[0, 15, 92]: 2019-04-16
+-  - fix failure to parse empty implicit block mapping key (reported by 
+-    `Nolan W <https://bitbucket.org/i2labs/>`__)
+-
+-[0, 15, 91]: 2019-04-05
+-  - allowing duplicate keys would not work for merge keys (reported by mamacdon on
+-    `StackOverflow <https://stackoverflow.com/questions/55540686/>`__ 
+-
+-[0, 15, 90]: 2019-04-04
+-  - fix issue with updating `CommentedMap` from list of tuples (reported by 
+-    `Peter Henry <https://bitbucket.org/mosbasik/>`__)
+-
+-[0, 15, 89]: 2019-02-27
+-  - fix for items with flow-mapping in block sequence output on single line
+-    (reported by `Zahari Dim <https://bitbucket.org/zahari_dim/>`__)
+-  - fix for safe dumping erroring in creation of representereror when dumping namedtuple
+-    (reported and solution by `Jaakko Kantojärvi <https://bitbucket.org/raphendyr/>`__)
+-
+-[0, 15, 88]: 2019-02-12
+-  - fix inclusing of python code from the subpackage data (containing extra tests,
+-    reported by `Florian Apolloner <https://bitbucket.org/apollo13/>`__)
+-
+-[0, 15, 87]: 2019-01-22
+-  - fix problem with empty lists and the code to reinsert merge keys (reported via email 
+-    by Zaloo)
+-
+-[0, 15, 86]: 2019-01-16
+-  - reinsert merge key in its old position (reported by grumbler on
+-    <Stackoverflow <https://stackoverflow.com/a/54206512/1307905>`__)
+-  - fix for issue with non-ASCII anchor names (reported and fix
+-    provided by Dandaleon Flux via email)
+-  - fix for issue when parsing flow mapping value starting with colon (in pure Python only)
+-    (reported by `FichteFoll <https://bitbucket.org/FichteFoll/>`__)
+-
+-[0, 15, 85]: 2019-01-08
+-  - the types used by `SafeConstructor` for mappings and sequences can
+-    now by set by assigning to `XXXConstructor.yaml_base_dict_type`
+-    (and `..._list_type`), preventing the need to copy two methods
+-    with 50+ lines that had `var = {}` hardcoded.  (Implemented to
+-    help solve an feature request by `Anthony Sottile
+-    <https://bitbucket.org/asottile/>`__ in an easier way)
+-
+-[0, 15, 84]: 2019-01-07
+-  - fix for `CommentedMap.copy()` not returning `CommentedMap`, let alone copying comments etc.
+-    (reported by `Anthony Sottile <https://bitbucket.org/asottile/>`__)
+-
+-[0, 15, 83]: 2019-01-02
+-  - fix for bug in roundtripping aliases used as key (reported via email by Zaloo)
+-
+-[0, 15, 82]: 2018-12-28
+-  - anchors and aliases on scalar int, float, string and bool are now preserved. Anchors
+-    do not need a referring alias for these (reported by 
+-    `Alex Harvey <https://bitbucket.org/alexharv074/>`__)
+-  - anchors no longer lost on tagged objects when roundtripping (reported by `Zaloo 
+-    <https://bitbucket.org/zaloo/>`__)
+-
+-[0, 15, 81]: 2018-12-06
+- - fix issue saving methods of metaclass derived classes (reported and fix provided
+-   by `Douglas Raillard <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-[0, 15, 80]: 2018-11-26
+- - fix issue emitting BEL character when round-tripping invalid folded input
+-   (reported by Isaac on `StackOverflow <https://stackoverflow.com/a/53471217/1307905>`__)
+-    
+-[0, 15, 79]: 2018-11-21
+-  - fix issue with anchors nested deeper than alias (reported by gaFF on
+-    `StackOverflow <https://stackoverflow.com/a/53397781/1307905>`__)
+-
+-[0, 15, 78]: 2018-11-15
+-  - fix setup issue for 3.8 (reported by `Sidney Kuyateh 
+-    <https://bitbucket.org/autinerd/>`__)
+-
+-[0, 15, 77]: 2018-11-09
+-  - setting `yaml.sort_base_mapping_type_on_output = False`, will prevent
+-    explicit sorting by keys in the base representer of mappings. Roundtrip
+-    already did not do this. Usage only makes real sense for Python 3.6+
+-    (feature request by `Sebastian Gerber <https://bitbucket.org/spacemanspiff2007/>`__).
+-  - implement Python version check in YAML metadata in ``_test/test_z_data.py``
+-
+-[0, 15, 76]: 2018-11-01
+-  - fix issue with empty mapping and sequence loaded as flow-style
+-    (mapping reported by `Min RK <https://bitbucket.org/minrk/>`__, sequence
+-    by `Maged Ahmed <https://bitbucket.org/maged2/>`__)
+-
+-[0, 15, 75]: 2018-10-27
+-  - fix issue with single '?' scalar (reported by `Terrance 
+-    <https://bitbucket.org/OllieTerrance/>`__)
+-  - fix issue with duplicate merge keys (prompted by `answering 
+-    <https://stackoverflow.com/a/52852106/1307905>`__ a 
+-    `StackOverflow question <https://stackoverflow.com/q/52851168/1307905>`__
+-    by `math <https://stackoverflow.com/users/1355634/math>`__)
+-
+-[0, 15, 74]: 2018-10-17
+-  - fix dropping of comment on rt before sequence item that is sequence item
+-    (reported by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-
+-[0, 15, 73]: 2018-10-16
+-  - fix irregular output on pre-comment in sequence within sequence (reported
+-    by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-  - allow non-compact (i.e. next line) dumping sequence/mapping within sequence.
+-
+-[0, 15, 72]: 2018-10-06
+-  - fix regression on explicit 1.1 loading with the C based scanner/parser
+-    (reported by `Tomas Vavra <https://bitbucket.org/xtomik/>`__)
+-
+-[0, 15, 71]: 2018-09-26
+-  - fix regression where handcrafted CommentedMaps could not be initiated (reported by 
+-    `Dan Helfman <https://bitbucket.org/dhelfman/>`__)
+-  - fix regression with non-root literal scalars that needed indent indicator
+-    (reported by `Clark Breyman <https://bitbucket.org/clarkbreyman/>`__)
+-  - tag:yaml.org,2002:python/object/apply now also uses __qualname__ on PY3
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-[0, 15, 70]: 2018-09-21
+-  - reverted CommentedMap and CommentedSeq to subclass ordereddict resp. list,
+-    reimplemented merge maps so that both ``dict(**commented_map_instance)`` and JSON
+-    dumping works. This also allows checking with ``isinstance()`` on ``dict`` resp. ``list``.
+-    (Proposed by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__, with feedback
+-    from `blhsing <https://stackoverflow.com/users/6890912/blhsing>`__ on
+-    `StackOverflow <https://stackoverflow.com/q/52314186/1307905>`__)
+-
+-[0, 15, 69]: 2018-09-20
+-  - fix issue with dump_all gobbling end-of-document comments on parsing
+-    (reported by `Pierre B. <https://bitbucket.org/octplane/>`__)
+-
+-[0, 15, 68]: 2018-09-20
+-  - fix issue with parsabel, but incorrect output with nested flow-style sequences
+-    (reported by `Dougal Seeley <https://bitbucket.org/dseeley/>`__)
+-  - fix issue with loading Python objects that have __setstate__ and recursion in parameters
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-[0, 15, 67]: 2018-09-19
+-  - fix issue with extra space inserted with non-root literal strings 
+-    (Issue reported and PR with fix provided by 
+-    `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-
+-[0, 15, 66]: 2018-09-07
+-  - fix issue with fold indicating characters inserted in safe_load-ed folded strings
+-    (reported by `Maximilian Hils <https://bitbucket.org/mhils/>`__).
+-
+-[0, 15, 65]: 2018-09-07
+-  - fix issue #232 revert to throw ParserError for unexcpected ``]``
+-    and ``}`` instead of IndexError. (Issue reported and PR with fix
+-    provided by `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-  - added ``key`` and ``reverse`` parameter (suggested by Jannik Klemm via email)
+-  - indent root level literal scalars that have directive or document end markers
+-    at the beginning of a line
+-
+-[0, 15, 64]: 2018-08-30
+-  - support round-trip of tagged sequences: ``!Arg [a, {b: 1}]``
+-  - single entry mappings in flow sequences now written by default without quotes
+-    set ``yaml.brace_single_entry_mapping_in_flow_sequence=True`` to force
+-    getting ``[a, {b: 1}, {c: {d: 2}}]`` instead of the default ``[a, b: 1, c: {d: 2}]``
+-  - fix issue when roundtripping floats starting with a dot such as ``.5``
+-    (reported by `Harrison Gregg <https://bitbucket.org/HarrisonGregg/>`__)
+-
+-[0, 15, 63]: 2018-08-29
+-  - small fix only necessary for Windows users that don't use wheels.
+-
+-[0, 15, 62]: 2018-08-29
+-  - C based reader/scanner & emitter now allow setting of 1.2 as YAML version.
+-    ** The loading/dumping is still YAML 1.1 code**, so use the common subset of
+-    YAML 1.2 and 1.1 (reported by `Ge Yang <https://bitbucket.org/yangge/>`__)
+-
+-[0, 15, 61]: 2018-08-23
+-  - support for round-tripping folded style scalars (initially requested 
+-    by `Johnathan Viduchinsky <https://bitbucket.org/johnathanvidu/>`__)
+-  - update of C code
+-  - speed up of scanning (~30% depending on the input)
+-
+-[0, 15, 60]: 2018-08-18
+-  - cleanup for mypy 
+-  - spurious print in library (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__), now automatically checked 
+-
+-[0, 15, 59]: 2018-08-17
+-  - issue with C based loader and leading zeros (reported by 
+-    `Tom Hamilton Stubber <https://bitbucket.org/TomHamiltonStubber/>`__)
+-
+-[0, 15, 58]: 2018-08-17
+-  - simple mappings can now be used as keys when round-tripping::
+-
+-      {a: 1, b: 2}: hello world
+-      
+-    although using the obvious operations (del, popitem) on the key will
+-    fail, you can mutilate it by going through its attributes. If you load the
+-    above YAML in `d`, then changing the value is cumbersome:
+-
+-        d = {CommentedKeyMap([('a', 1), ('b', 2)]): "goodbye"}
+-
+-    and changing the key even more so:
+-
+-        d[CommentedKeyMap([('b', 1), ('a', 2)])] = d.pop(
+-                     CommentedKeyMap([('a', 1), ('b', 2)]))
+-
+-    (you can use a `dict` instead of a list of tuples (or ordereddict), but that might result
+-    in a different order, of the keys of the key, in the output)
+-  - check integers to dump with 1.2 patterns instead of 1.1 (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__)
+-  
+-
+-[0, 15, 57]: 2018-08-15
+-  - Fix that CommentedSeq could no longer be used in adding or do a copy
+-    (reported by `Christopher Wright <https://bitbucket.org/CJ-Wright4242/>`__)
+-
+-[0, 15, 56]: 2018-08-15
+-  - fix issue with ``python -O`` optimizing away code (reported, and detailed cause
+-    pinpointed, by `Alex Grönholm <https://bitbucket.org/agronholm/>`__
+-
+-[0, 15, 55]: 2018-08-14
+-
+-  - unmade ``CommentedSeq`` a subclass of ``list``. It is now
+-    indirectly a subclass of the standard
+-    ``collections.abc.MutableSequence`` (without .abc if you are
+-    still on Python2.7). If you do ``isinstance(yaml.load('[1, 2]'),
+-    list)``) anywhere in your code replace ``list`` with
+-    ``MutableSequence``.  Directly, ``CommentedSeq`` is a subclass of
+-    the abstract baseclass ``ruamel.yaml.compat.MutableScliceableSequence``,
+-    with the result that *(extended) slicing is supported on 
+-    ``CommentedSeq``*.
+-    (reported by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__)
+-  - duplicate keys (or their values) with non-ascii now correctly
+-    report in Python2, instead of raising a Unicode error.
+-    (Reported by `Jonathan Pyle <https://bitbucket.org/jonathan_pyle/>`__)
+-
+-[0, 15, 54]: 2018-08-13
+-
+-  - fix issue where a comment could pop-up twice in the output (reported by 
+-    `Mike Kazantsev <https://bitbucket.org/mk_fg/>`__ and by 
+-    `Nate Peterson <https://bitbucket.org/ndpete21/>`__)
+-  - fix issue where JSON object (mapping) without spaces was not parsed
+-    properly (reported by `Marc Schmidt <https://bitbucket.org/marcj/>`__)
+-  - fix issue where comments after empty flow-style mappings were not emitted
+-    (reported by `Qinfench Chen <https://bitbucket.org/flyin5ish/>`__)
+-
+-[0, 15, 53]: 2018-08-12
+-  - fix issue with flow style mapping with comments gobbled newline (reported
+-    by `Christopher Lambert <https://bitbucket.org/XN137/>`__)
+-  - fix issue where single '+' under YAML 1.2 was interpreted as
+-    integer, erroring out (reported by `Jethro Yu
+-    <https://bitbucket.org/jcppkkk/>`__)
+-
+-[0, 15, 52]: 2018-08-09
+-  - added `.copy()` mapping representation for round-tripping
+-    (``CommentedMap``) to fix incomplete copies of merged mappings
+-    (reported by `Will Richards
+-    <https://bitbucket.org/will_richards/>`__) 
+-  - Also unmade that class a subclass of ordereddict to solve incorrect behaviour
+-    for ``{**merged-mapping}`` and ``dict(**merged-mapping)`` (reported by
+-    `Filip Matzner <https://bitbucket.org/FloopCZ/>`__)
+-
+-[0, 15, 51]: 2018-08-08
+-  - Fix method name dumps (were not dotted) and loads (reported by `Douglas Raillard 
+-    <https://bitbucket.org/DouglasRaillard/>`__)
+-  - Fix spurious trailing white-space caused when the comment start
+-    column was no longer reached and there was no actual EOL comment
+-    (e.g. following empty line) and doing substitutions, or when
+-    quotes around scalars got dropped.  (reported by `Thomas Guillet
+-    <https://bitbucket.org/guillett/>`__)
+-
+-[0, 15, 50]: 2018-08-05
+-  - Allow ``YAML()`` as a context manager for output, thereby making it much easier
+-    to generate multi-documents in a stream. 
+-  - Fix issue with incorrect type information for `load()` and `dump()` (reported 
+-    by `Jimbo Jim <https://bitbucket.org/jimbo1qaz/>`__)
+-
+-[0, 15, 49]: 2018-08-05
+-  - fix preservation of leading newlines in root level literal style scalar,
+-    and preserve comment after literal style indicator (``|  # some comment``)
+-    Both needed for round-tripping multi-doc streams in 
+-    `ryd <https://pypi.org/project/ryd/>`__.
+-
+-[0, 15, 48]: 2018-08-03
+-  - housekeeping: ``oitnb`` for formatting, mypy 0.620 upgrade and conformity
+-
+-[0, 15, 47]: 2018-07-31
+-  - fix broken 3.6 manylinux1 (result of an unclean ``build`` (reported by 
+-    `Roman Sichnyi <https://bitbucket.org/rsichnyi-gl/>`__)
+-
+-
+-[0, 15, 46]: 2018-07-29
+-  - fixed DeprecationWarning for importing from ``collections`` on 3.7
+-    (issue 210, reported by `Reinoud Elhorst
+-    <https://bitbucket.org/reinhrst/>`__). It was `difficult to find
+-    why tox/pytest did not report
+-    <https://stackoverflow.com/q/51573204/1307905>`__ and as time
+-    consuming to actually `fix
+-    <https://stackoverflow.com/a/51573205/1307905>`__ the tests.
+-
+-[0, 15, 45]: 2018-07-26
+-  - After adding failing test for ``YAML.load_all(Path())``, remove StopIteration 
+-    (PR provided by `Zachary Buhman <https://bitbucket.org/buhman/>`__,
+-    also reported by `Steven Hiscocks <https://bitbucket.org/sdhiscocks/>`__.
+-
+-[0, 15, 44]: 2018-07-14
+-  - Correct loading plain scalars consisting of numerals only and
+-    starting with `0`, when not explicitly specifying YAML version
+-    1.1. This also fixes the issue about dumping string `'019'` as
+-    plain scalars as reported by `Min RK
+-    <https://bitbucket.org/minrk/>`__, that prompted this chance.
+-
+-[0, 15, 43]: 2018-07-12
+-  - merge PR33: Python2.7 on Windows is narrow, but has no
+-    ``sysconfig.get_config_var('Py_UNICODE_SIZE')``. (merge provided by
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__)
+-  - ``register_class()`` now returns class (proposed by
+-    `Mike Nerone <https://bitbucket.org/Manganeez/>`__}
+-
+-[0, 15, 42]: 2018-07-01
+-  - fix regression showing only on narrow Python 2.7 (py27mu) builds
+-    (with help from
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__ and
+-    `Colm O'Connor <>`__).
+-  - run pre-commit ``tox`` on Python 2.7 wide and narrow, as well as
+-    3.4/3.5/3.6/3.7/pypy
+-
+-[0, 15, 41]: 2018-06-27
+-  - add detection of C-compile failure (investigation prompted by 
+-    `StackOverlow <https://stackoverflow.com/a/51057399/1307905>`__ by 
+-    `Emmanuel Blot <https://stackoverflow.com/users/8233409/emmanuel-blot>`__),
+-    which was removed while no longer dependent on ``libyaml``, C-extensions
+-    compilation still needs a compiler though.
+-
+-[0, 15, 40]: 2018-06-18
+-  - added links to landing places as suggested in issue 190 by
+-    `KostisA <https://bitbucket.org/ankostis/>`__
+-  - fixes issue #201: decoding unicode escaped tags on Python2, reported
+-    by `Dan Abolafia <https://bitbucket.org/danabo/>`__
+-
+-[0, 15, 39]: 2018-06-16
+-  - merge PR27 improving package startup time (and loading when regexp not 
+-    actually used), provided by 
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__
+-
+-[0, 15, 38]: 2018-06-13
+-  - fix for losing precision when roundtripping floats by
+-    `Rolf Wojtech <https://bitbucket.org/asomov/>`__
+-  - fix for hardcoded dir separator not working for Windows by
+-    `Nuno André <https://bitbucket.org/nu_no/>`__
+-  - typo fix by `Andrey Somov <https://bitbucket.org/asomov/>`__
+-
+-[0, 15, 37]: 2018-03-21
+-  - again trying to create installable files for 187
+-
+-[0, 15, 36]: 2018-02-07
+-  - fix issue 187, incompatibility of C extension with 3.7 (reported by
+-    Daniel Blanchard)
+-
+-[0, 15, 35]: 2017-12-03
+-  - allow ``None`` as stream when specifying ``transform`` parameters to
+-    ``YAML.dump()``.
+-    This is useful if the transforming function doesn't return a meaningful value
+-    (inspired by `StackOverflow <https://stackoverflow.com/q/47614862/1307905>`__ by
+-    `rsaw <https://stackoverflow.com/users/406281/rsaw>`__).
+-
+-[0, 15, 34]: 2017-09-17
+-  - fix for issue 157: CDumper not dumping floats (reported by Jan Smitka)
+-
+-[0, 15, 33]: 2017-08-31
+-  - support for "undefined" round-tripping tagged scalar objects (in addition to
+-    tagged mapping object). Inspired by a use case presented by Matthew Patton
+-    on `StackOverflow <https://stackoverflow.com/a/45967047/1307905>`__.
+-  - fix issue 148: replace cryptic error message when using !!timestamp with an
+-    incorrectly formatted or non- scalar. Reported by FichteFoll.
+-
+-[0, 15, 32]: 2017-08-21
+-  - allow setting ``yaml.default_flow_style = None`` (default: ``False``) for
+-    for ``typ='rt'``.
+-  - fix for issue 149: multiplications on ``ScalarFloat`` now return ``float``
+-
+-[0, 15, 31]: 2017-08-15
+-  - fix Comment dumping
+-
+-[0, 15, 30]: 2017-08-14
+-  - fix for issue with "compact JSON" not parsing: ``{"in":{},"out":{}}``
+-    (reported on `StackOverflow <https://stackoverflow.com/q/45681626/1307905>`_ by
+-    `mjalkio <https://stackoverflow.com/users/5130525/mjalkio>`_
+-
+-[0, 15, 29]: 2017-08-14
+-  - fix issue #51: different indents for mappings and sequences (reported by 
+-    Alex Harvey)
+-  - fix for flow sequence/mapping as element/value of block sequence with 
+-    sequence-indent minus dash-offset not equal two.
+-
+-[0, 15, 28]: 2017-08-13
+-  - fix issue #61: merge of merge cannot be __repr__-ed (reported by Tal Liron)
+-
+-[0, 15, 27]: 2017-08-13
+-  - fix issue 62, YAML 1.2 allows ``?`` and ``:`` in plain scalars if non-ambigious
+-    (reported by nowox)
+-  - fix lists within lists which would make comments disappear
+-
+-[0, 15, 26]: 2017-08-10
+-  - fix for disappearing comment after empty flow sequence (reported by
+-    oit-tzhimmash)
+-
+-[0, 15, 25]: 2017-08-09
+-  - fix for problem with dumping (unloaded) floats (reported by eyenseo)
+-
+-[0, 15, 24]: 2017-08-09
+-  - added ScalarFloat which supports roundtripping of 23.1, 23.100,
+-    42.00E+56, 0.0, -0.0 etc. while keeping the format. Underscores in mantissas
+-    are not preserved/supported (yet, is anybody using that?).
+-  - (finally) fixed longstanding issue 23 (reported by `Antony Sottile
+-    <https://bitbucket.org/asottile/>`_), now handling comment between block
+-    mapping key and value correctly
+-  - warn on YAML 1.1 float input that is incorrect (triggered by invalid YAML
+-    provided by Cecil Curry)
+-  - allow setting of boolean representation (`false`, `true`) by using:
+-    ``yaml.boolean_representation = [u'False', u'True']``
+-
+-[0, 15, 23]: 2017-08-01
+-  - fix for round_tripping integers on 2.7.X > sys.maxint (reported by ccatterina)
+-
+-[0, 15, 22]: 2017-07-28
+-  - fix for round_tripping singe excl. mark tags doubling (reported and fix by Jan Brezina)
+-
+-[0, 15, 21]: 2017-07-25
+-  - fix for writing unicode in new API, https://stackoverflow.com/a/45281922/1307905
+-
+-[0, 15, 20]: 2017-07-23
+-  - wheels for windows including C extensions
+-
+-[0, 15, 19]: 2017-07-13
+-  - added object constructor for rt, decorator ``yaml_object`` to replace YAMLObject.
+-  - fix for problem using load_all with Path() instance
+-  - fix for load_all in combination with zero indent block style literal
+-    (``pure=True`` only!)
+-
+-[0, 15, 18]: 2017-07-04
+-  - missing ``pure`` attribute on ``YAML`` useful for implementing `!include` tag
+-    constructor for `including YAML files in a YAML file
+-    <https://stackoverflow.com/a/44913652/1307905>`_
+-  - some documentation improvements
+-  - trigger of doc build on new revision
+-
+-[0, 15, 17]: 2017-07-03
+-  - support for Unicode supplementary Plane **output** with allow_unicode
+-    (input was already supported, triggered by
+-    `this <https://stackoverflow.com/a/44875714/1307905>`_ Stack Overflow Q&A)
+-
+-[0, 15, 16]: 2017-07-01
+-  - minor typing issues (reported and fix provided by
+-    `Manvendra Singh <https://bitbucket.org/manu-chroma/>`_)
+-  - small doc improvements
+-
+-[0, 15, 15]: 2017-06-27
+-  - fix for issue 135, typ='safe' not dumping in Python 2.7
+-    (reported by Andrzej Ostrowski <https://bitbucket.org/aostr123/>`_)
+-
+-[0, 15, 14]: 2017-06-25
+-  - setup.py: change ModuleNotFoundError to ImportError (reported and fix by Asley Drake)
+-
+-[0, 15, 13]: 2017-06-24
+-  - suppress duplicate key warning on mappings with merge keys (reported by
+-    Cameron Sweeney)
+-
+-[0, 15, 12]: 2017-06-24
+-  - remove fatal dependency of setup.py on wheel package (reported by
+-    Cameron Sweeney)
+-
+-[0, 15, 11]: 2017-06-24
+-  - fix for issue 130, regression in nested merge keys (reported by
+-    `David Fee <https://bitbucket.org/dfee/>`_)
+-
+-[0, 15, 10]: 2017-06-23
+-  - top level PreservedScalarString not indented if not explicitly asked to
+-  - remove Makefile (not very useful anyway)
+-  - some mypy additions
+-
+-[0, 15, 9]: 2017-06-16
+-  - fix for issue 127: tagged scalars were always quoted and seperated
+-    by a newline when in a block sequence (reported and largely fixed by
+-    `Tommy Wang <https://bitbucket.org/twang817/>`_)
+-
+-[0, 15, 8]: 2017-06-15
+-  - allow plug-in install via ``install ruamel.yaml[jinja2]``
+-
+-[0, 15, 7]: 2017-06-14
+-  - add plug-in mechanism for load/dump pre resp. post-processing
+-
+-[0, 15, 6]: 2017-06-10
+-  - a set() with duplicate elements now throws error in rt loading
+-  - support for toplevel column zero literal/folded scalar in explicit documents
+-
+-[0, 15, 5]: 2017-06-08
+-  - repeat `load()` on a single `YAML()` instance would fail.
+-
+-(0, 15, 4) 2017-06-08: |
+-  - `transform` parameter on dump that expects a function taking a
+-    string and returning a string. This allows transformation of the output
+-    before it is written to stream.
+-  - some updates to the docs
+-
+-(0, 15, 3) 2017-06-07:
+-  - No longer try to compile C extensions on Windows. Compilation can be forced by setting
+-    the environment variable `RUAMEL_FORCE_EXT_BUILD` to some value
+-    before starting the `pip install`.
+-
+-(0, 15, 2) 2017-06-07:
+-  - update to conform to mypy 0.511:mypy --strict
+-
+-(0, 15, 1) 2017-06-07:
+-  - Any `duplicate keys  <http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys>`_
+-    in mappings generate an error (in the old API this change generates a warning until 0.16)
+-  - dependecy on ruamel.ordereddict for 2.7 now via extras_require
+-
+-(0, 15, 0) 2017-06-04:
+-  - it is now allowed to pass in a ``pathlib.Path`` as "stream" parameter to all
+-    load/dump functions
+-  - passing in a non-supported object (e.g. a string) as "stream" will result in a
+-    much more meaningful YAMLStreamError.
+-  - assigning a normal string value to an existing CommentedMap key or CommentedSeq
+-    element will result in a value cast to the previous value's type if possible.
+-
+-(0, 14, 12) 2017-05-14:
+-  - fix for issue 119, deepcopy not returning subclasses (reported and PR by
+-    Constantine Evans <cevans@evanslabs.org>)
+-
+-(0, 14, 11) 2017-05-01:
+-  - fix for issue 103 allowing implicit documents after document end marker line (``...``)
+-    in YAML 1.2
+-
+-(0, 14, 10) 2017-04-26:
+-  - fix problem with emitting using cyaml
+-
+-(0, 14, 9) 2017-04-22:
+-  - remove dependency on ``typing`` while still supporting ``mypy``
+-    (http://stackoverflow.com/a/43516781/1307905)
+-  - fix unclarity in doc that stated 2.6 is supported (reported by feetdust)
+-
+-(0, 14, 8) 2017-04-19:
+-  - fix Text not available on 3.5.0 and 3.5.1, now proactively setting version guards
+-    on all files (reported by `João Paulo Magalhães <https://bitbucket.org/jpmag/>`_)
+-
+-(0, 14, 7) 2017-04-18:
+-  - round trip of integers (decimal, octal, hex, binary) now preserve
+-    leading zero(s) padding and underscores. Underscores are presumed
+-    to be at regular distances (i.e. ``0o12_345_67`` dumps back as
+-    ``0o1_23_45_67`` as the space from the last digit to the
+-    underscore before that is the determining factor).
+-
+-(0, 14, 6) 2017-04-14:
+-  - binary, octal and hex integers are now preserved by default. This
+-    was a known deficiency. Working on this was prompted by the issue report (112)
+-    from devnoname120, as well as the additional experience with `.replace()`
+-    on `scalarstring` classes.
+-  - fix issues 114 cannot install on Buildozer (reported by mixmastamyk).
+-    Setting env. var ``RUAMEL_NO_PIP_INSTALL_CHECK`` will suppress ``pip``-check.
+-
+-(0, 14, 5) 2017-04-04:
+-  - fix issue 109 None not dumping correctly at top level (reported by Andrea Censi)
+-  - fix issue 110 .replace on Preserved/DoubleQuoted/SingleQuoted ScalarString
+-    would give back "normal" string (reported by sandres23)
+-
+-(0, 14, 4) 2017-03-31:
+-  - fix readme
+-
+-(0, 14, 3) 2017-03-31:
+-  - fix for 0o52 not being a string in YAML 1.1 (reported on
+-    `StackOverflow Q&A 43138503><http://stackoverflow.com/a/43138503/1307905>`_ by
+-    `Frank D <http://stackoverflow.com/users/7796630/frank-d>`_
+-
+-(0, 14, 2) 2017-03-23:
+-  - fix for old default pip on Ubuntu 14.04 (reported by Sébastien Maccagnoni-Munch)
+-
+-(0.14.1) 2017-03-22:
+-  - fix Text not available on 3.5.0 and 3.5.1 (reported by Charles Bouchard-Légaré)
+-
+-(0.14.0) 2017-03-21:
+-  - updates for mypy --strict
+-  - preparation for moving away from inheritance in Loader and Dumper, calls from e.g.
+-    the Representer to the Serializer.serialize() are now done via the attribute
+-    .serializer.serialize(). Usage of .serialize() outside of Serializer will be
+-    deprecated soon
+-  - some extra tests on main.py functions
+-
+-(0.13.14) 2017-02-12:
+-  - fix for issue 97, clipped block scalar followed by empty lines and comment
+-    would result in two CommentTokens of which the first was dropped.
+-    (reported by Colm O'Connor)
+-
+-(0.13.13) 2017-01-28:
+-  - fix for issue 96, prevent insertion of extra empty line if indented mapping entries
+-    are separated by an empty line (reported by Derrick Sawyer)
+-
+-(0.13.11) 2017-01-23:
+-  - allow ':' in flow style scalars if not followed by space. Also don't
+-    quote such scalar as this is no longer necessary.
+-  - add python 3.6 manylinux wheel to PyPI
+-
+-(0.13.10) 2017-01-22:
+-  - fix for issue 93, insert spurious blank line before single line comment
+-    between indented sequence elements (reported by Alex)
+-
+-(0.13.9) 2017-01-18:
+-  - fix for issue 92, wrong import name reported by the-corinthian
+-
+-(0.13.8) 2017-01-18:
+-  - fix for issue 91, when a compiler is unavailable reported by Maximilian Hils
+-  - fix for deepcopy issue with TimeStamps not preserving 'T', reported on
+-    `StackOverflow Q&A <http://stackoverflow.com/a/41577841/1307905>`_ by
+-    `Quuxplusone <http://stackoverflow.com/users/1424877/quuxplusone>`_
+-
+-(0.13.7) 2016-12-27:
+-  - fix for issue 85, constructor.py importing unicode_literals caused mypy to fail
+-    on 2.7 (reported by Peter Amstutz)
+-
+-(0.13.6) 2016-12-27:
+-  - fix for issue 83, collections.OrderedDict not representable by SafeRepresenter
+-    (reported by Frazer McLean)
+-
+-(0.13.5) 2016-12-25:
+-  - fix for issue 84, deepcopy not properly working (reported by Peter Amstutz)
+-
+-(0.13.4) 2016-12-05:
+-  - another fix for issue 82, change to non-global resolver data broke implicit type
+-    specification
+-
+-(0.13.3) 2016-12-05:
+-  - fix for issue 82, deepcopy not working (reported by code monk)
+-
+-(0.13.2) 2016-11-28:
+-  - fix for comments after empty (null) values  (reported by dsw2127 and cokelaer)
+-
+-(0.13.1) 2016-11-22:
+-  - optimisations on memory usage when loading YAML from large files (py3 -50%, py2 -85%)
+-
+-(0.13.0) 2016-11-20:
+-  - if ``load()`` or ``load_all()`` is called with only a single argument
+-    (stream or string)
+-    a UnsafeLoaderWarning will be issued once. If appropriate you can surpress this
+-    warning by filtering it. Explicitly supplying the ``Loader=ruamel.yaml.Loader``
+-    argument, will also prevent it from being issued. You should however consider
+-    using ``safe_load()``, ``safe_load_all()`` if your YAML input does not use tags.
+-  - allow adding comments before and after keys (based on
+-    `StackOveflow Q&A <http://stackoverflow.com/a/40705671/1307905>`_  by
+-    `msinn <http://stackoverflow.com/users/7185467/msinn>`_)
+-
+-(0.12.18) 2016-11-16:
+-  - another fix for numpy (re-reported independently by PaulG & Nathanial Burdic)
+-
+-(0.12.17) 2016-11-15:
+-  - only the RoundTripLoader included the Resolver that supports YAML 1.2
+-    now all loaders do (reported by mixmastamyk)
+-
+-(0.12.16) 2016-11-13:
+-  - allow dot char (and many others) in anchor name
+-    Fix issue 72 (reported by Shalon Wood)
+-  - |
+-    Slightly smarter behaviour dumping strings when no style is
+-    specified. Single string scalars that start with single quotes
+-    or have newlines now are dumped double quoted "'abc\nklm'" instead of
+-
+-      '''abc
+-
+-        klm'''
+-
+-(0.12.14) 2016-09-21:
+- - preserve round-trip sequences that are mapping keys
+-   (prompted by stackoverflow question 39595807 from Nowox)
+-
+-(0.12.13) 2016-09-15:
+- - Fix for issue #60 representation of CommentedMap with merge
+-   keys incorrect (reported by Tal Liron)
+-
+-(0.12.11) 2016-09-06:
+- - Fix issue 58 endless loop in scanning tokens (reported by
+-   Christopher Lambert)
+-
+-(0.12.10) 2016-09-05:
+- - Make previous fix depend on unicode char width (32 bit unicode support
+-   is a problem on MacOS reported by David Tagatac)
+-
+-(0.12.8) 2016-09-05:
+-   - To be ignored Unicode characters were not properly regex matched
+-     (no specific tests, PR by Haraguroicha Hsu)
+-
+-(0.12.7) 2016-09-03:
+-   - fixing issue 54 empty lines with spaces (reported by Alex Harvey)
+-
+-(0.12.6) 2016-09-03:
+-   - fixing issue 46 empty lines between top-level keys were gobbled (but
+-     not between sequence elements, nor between keys in netsted mappings
+-     (reported by Alex Harvey)
+-
+-(0.12.5) 2016-08-20:
+-  - fixing issue 45 preserving datetime formatting (submitted by altuin)
+-    Several formatting parameters are preserved with some normalisation:
+-  - preserve 'T', 't' is replaced by 'T', multiple spaces between date
+-    and time reduced to one.
+-  - optional space before timezone is removed
+-  - still using microseconds, but now rounded (.1234567 -> .123457)
+-  - Z/-5/+01:00 preserved
+-
+-(0.12.4) 2016-08-19:
+-  - Fix for issue 44: missing preserve_quotes keyword argument (reported
+-    by M. Crusoe)
+-
+-(0.12.3) 2016-08-17:
+-  - correct 'in' operation for merged CommentedMaps in round-trip mode
+-    (implementation inspired by J.Ngo, but original not working for merges)
+-  - iteration over round-trip loaded mappings, that contain merges. Also
+-    keys(), items(), values() (Py3/Py2) and iterkeys(), iteritems(),
+-    itervalues(), viewkeys(), viewitems(), viewvalues() (Py2)
+-  - reuse of anchor name now generates warning, not an error. Round-tripping such
+-    anchors works correctly. This inherited PyYAML issue was brought to attention
+-    by G. Coddut (and was long standing https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=515634)
+-    suppressing the warning::
+-
+-        import warnings
+-        from ruamel.yaml.error import ReusedAnchorWarning
+-        warnings.simplefilter("ignore", ReusedAnchorWarning)
+-
+-(0.12.2) 2016-08-16:
+-  - minor improvements based on feedback from M. Crusoe
+-    https://bitbucket.org/ruamel/yaml/issues/42/
+-
+-(0.12.0) 2016-08-16:
+-  - drop support for Python 2.6
+-  - include initial Type information (inspired by M. Crusoe)
+-
+-(0.11.15) 2016-08-07:
+-  - Change to prevent FutureWarning in NumPy, as reported by tgehring
+-    ("comparison to None will result in an elementwise object comparison in the future")
+-
+-(0.11.14) 2016-07-06:
+-  - fix preserve_quotes missing on original Loaders (as reported
+-    by Leynos, bitbucket issue 38)
+-
+-(0.11.13) 2016-07-06:
+-  - documentation only, automated linux wheels
+-
+-(0.11.12) 2016-07-06:
+-  - added support for roundtrip of single/double quoted scalars using:
+-    ruamel.yaml.round_trip_load(stream, preserve_quotes=True)
+-
+-(0.11.10) 2016-05-02:
+-
+-- added .insert(pos, key, value, comment=None) to CommentedMap
+-
+-(0.11.10) 2016-04-19:
+-
+-- indent=2, block_seq_indent=2 works as expected
+-
+-(0.11.0) 2016-02-18:
+-  - RoundTripLoader loads 1.2 by default (no sexagesimals, 012 octals nor
+-    yes/no/on/off booleans
+-
+-(0.10.11) 2015-09-17:
+-- Fix issue 13: dependency on libyaml to be installed for yaml.h
+-
+-(0.10.10) 2015-09-15:
+-- Python 3.5 tested with tox
+-- pypy full test (old PyYAML tests failed on too many open file handles)
+-
+-(0.10.6-0.10.9) 2015-09-14:
+-- Fix for issue 9
+-- Fix for issue 11: double dump losing comments
+-- Include libyaml code
+-- move code from 'py' subdir for proper namespace packaging.
+-
+-(0.10.5) 2015-08-25:
+-- preservation of newlines after block scalars. Contributed by Sam Thursfield.
+-
+-(0.10) 2015-06-22:
+-- preservation of hand crafted anchor names ( not of the form "idNNN")
+-- preservation of map merges ( <<< )
+-
+-(0.9) 2015-04-18:
+-- collections read in by the RoundTripLoader now have a ``lc`` property
+-  that can be quired for line and column ( ``lc.line`` resp. ``lc.col``)
+-
+-(0.8) 2015-04-15:
+-- bug fix for non-roundtrip save of ordereddict
+-- adding/replacing end of line comments on block style mappings/sequences
+-
+-(0.7.2) 2015-03-29:
+-- support for end-of-line comments on flow style sequences and mappings
+-
+-(0.7.1) 2015-03-27:
+-- RoundTrip capability of flow style sequences ( 'a: b, c, d' )
+-
+-(0.7) 2015-03-26:
+-- tests (currently failing) for inline sequece and non-standard spacing between
+-  block sequence dash and scalar (Anthony Sottile)
+-- initial possibility (on list, i.e. CommentedSeq) to set the flow format
+-  explicitly
+-- RoundTrip capability of flow style sequences ( 'a: b, c, d' )
+-
+-(0.6.1) 2015-03-15:
+-- setup.py changed so ruamel.ordereddict no longer is a dependency
+-  if not on CPython 2.x (used to test only for 2.x, which breaks pypy 2.5.0
+-  reported by Anthony Sottile)
+-
+-(0.6) 2015-03-11:
+-- basic support for scalars with preserved newlines
+-- html option for yaml command
+-- check if yaml C library is available before trying to compile C extension
+-- include unreleased change in PyYAML dd 20141128
+-
+-(0.5) 2015-01-14:
+-- move configobj -> YAML generator to own module
+-- added dependency on ruamel.base (based on feedback from  Sess
+-  <leycec@gmail.com>
+-
+-(0.4) 20141125:
+-- move comment classes in own module comments
+-- fix omap pre comment
+-- make !!omap and !!set take parameters. There are still some restrictions:
+-  - no comments before the !!tag
+-- extra tests
+-
+-(0.3) 20141124:
+-- fix value comment occuring as on previous line (looking like eol comment)
+-- INI conversion in yaml + tests
+-- (hidden) test in yaml for debugging with auto command
+-- fix for missing comment in middel of simple map + test
+-
+-(0.2) 20141123:
+-- add ext/_yaml.c etc to the source tree
+-- tests for yaml to work on 2.6/3.3/3.4
+-- change install so that you can include ruamel.yaml instead of ruamel.yaml.py
+-- add "yaml" utility with initial subcommands (test rt, from json)
+-
+-(0.1) 20141122:
+-- merge py2 and py3 code bases
+-- remove support for 2.5/3.0/3.1/3.2 (this merge relies on u"" as
+-  available in 3.3 and . imports not available in 2.5)
+-- tox.ini for 2.7/3.4/2.6/3.3
+-- remove lib3/ and tests/lib3 directories and content
+-- commit
+-- correct --verbose for test application
+-- DATA=changed to be relative to __file__ of code
+-- DATA using os.sep
+-- remove os.path from imports as os is already imported
+-- have test_yaml.py exit with value 0 on success, 1 on failures, 2 on
+-  error
+-- added support for octal integers starting with '0o'
+-  keep support for 01234 as well as 0o1234
+-- commit
+-- added test_roundtrip_data:
+-  requirest a .data file and .roundtrip (empty), yaml_load .data
+-  and compare dump against original.
+-- fix grammar as per David Pursehouse:
+-  https://bitbucket.org/xi/pyyaml/pull-request/5/fix-grammar-in-error-messages/diff
+-- http://www.json.org/ extra escaped char \/
+-  add .skip-ext as libyaml is not updated
+-- David Fraser: Extract a method to represent keys in mappings, so that
+-  a subclass can choose not to quote them, used in repesent_mapping
+-  https://bitbucket.org/davidfraser/pyyaml/
+-- add CommentToken and percolate through parser and composer and constructor
+-- add Comments to wrapped mapping and sequence constructs (not to scalars)
+-- generate YAML with comments
+-- initial README
+diff --git a/dynaconf/vendor/ruamel/yaml/LICENSE b/dynaconf/vendor/ruamel/yaml/LICENSE
+deleted file mode 100644
+index 5b863d3..0000000
+--- a/dynaconf/vendor/ruamel/yaml/LICENSE
++++ /dev/null
+@@ -1,21 +0,0 @@
+- The MIT License (MIT)
+-
+- Copyright (c) 2014-2020 Anthon van der Neut, Ruamel bvba
+-
+- Permission is hereby granted, free of charge, to any person obtaining a copy
+- of this software and associated documentation files (the "Software"), to deal
+- in the Software without restriction, including without limitation the rights
+- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+- copies of the Software, and to permit persons to whom the Software is
+- furnished to do so, subject to the following conditions:
+-
+- The above copyright notice and this permission notice shall be included in
+- all copies or substantial portions of the Software.
+-
+- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+- SOFTWARE.
+diff --git a/dynaconf/vendor/ruamel/yaml/MANIFEST.in b/dynaconf/vendor/ruamel/yaml/MANIFEST.in
+deleted file mode 100644
+index 1aa7798..0000000
+--- a/dynaconf/vendor/ruamel/yaml/MANIFEST.in
++++ /dev/null
+@@ -1,3 +0,0 @@
+-include README.rst LICENSE CHANGES setup.py
+-prune ext*
+-prune clib*
+diff --git a/dynaconf/vendor/ruamel/yaml/PKG-INFO b/dynaconf/vendor/ruamel/yaml/PKG-INFO
+deleted file mode 100644
+index b0ce985..0000000
+--- a/dynaconf/vendor/ruamel/yaml/PKG-INFO
++++ /dev/null
+@@ -1,782 +0,0 @@
+-Metadata-Version: 2.1
+-Name: ruamel.yaml
+-Version: 0.16.10
+-Summary: ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order
+-Home-page: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree
+-Author: Anthon van der Neut
+-Author-email: a.van.der.neut@ruamel.eu
+-License: MIT license
+-Description: 
+-        ruamel.yaml
+-        ===========
+-        
+-        ``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python.
+-        
+-        :version:       0.16.10
+-        :updated:       2020-02-12
+-        :documentation: http://yaml.readthedocs.io
+-        :repository:    https://bitbucket.org/ruamel/yaml
+-        :pypi:          https://pypi.org/project/ruamel.yaml/
+-        
+-        
+-        Starting with version 0.15.0 the way YAML files are loaded and dumped
+-        is changing. See the API doc for details.  Currently existing
+-        functionality will throw a warning before being changed/removed.
+-        **For production systems you should pin the version being used with
+-        ``ruamel.yaml<=0.15``**. There might be bug fixes in the 0.14 series,
+-        but new functionality is likely only to be available via the new API.
+-        
+-        If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop
+-        me an email, preferably with some information on how you use the
+-        package (or a link to bitbucket/github) and I'll keep you informed
+-        when the status of the API is stable enough to make the transition.
+-        
+-        * `Overview <http://yaml.readthedocs.org/en/latest/overview.html>`_
+-        * `Installing <http://yaml.readthedocs.org/en/latest/install.html>`_
+-        * `Basic Usage <http://yaml.readthedocs.org/en/latest/basicuse.html>`_
+-        * `Details <http://yaml.readthedocs.org/en/latest/detail.html>`_
+-        * `Examples <http://yaml.readthedocs.org/en/latest/example.html>`_
+-        * `API <http://yaml.readthedocs.org/en/latest/api.html>`_
+-        * `Differences with PyYAML <http://yaml.readthedocs.org/en/latest/pyyaml.html>`_
+-        
+-        .. image:: https://readthedocs.org/projects/yaml/badge/?version=stable
+-           :target: https://yaml.readthedocs.org/en/stable
+-        
+-        .. image:: https://bestpractices.coreinfrastructure.org/projects/1128/badge
+-           :target: https://bestpractices.coreinfrastructure.org/projects/1128
+-        
+-        .. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/license.svg?format=raw
+-           :target: https://opensource.org/licenses/MIT
+-        
+-        .. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/pypi.svg?format=raw
+-           :target: https://pypi.org/project/ruamel.yaml/
+-        
+-        .. image:: https://sourceforge.net/p/oitnb/code/ci/default/tree/_doc/_static/oitnb.svg?format=raw
+-           :target: https://pypi.org/project/oitnb/
+-        
+-        .. image:: http://www.mypy-lang.org/static/mypy_badge.svg
+-           :target: http://mypy-lang.org/
+-        
+-        ChangeLog
+-        =========
+-        
+-        .. should insert NEXT: at the beginning of line for next key (with empty line)
+-        
+-        0.16.10 (2020-02-12):
+-          - (auto) updated image references in README to sourceforge
+-        
+-        0.16.9 (2020-02-11):
+-          - update CHANGES
+-        
+-        0.16.8 (2020-02-11):
+-          - update requirements so that ruamel.yaml.clib is installed for 3.8,
+-            as it has become available (via manylinux builds)
+-        
+-        0.16.7 (2020-01-30):
+-          - fix typchecking issue on TaggedScalar (reported by Jens Nielsen)
+-          - fix error in dumping literal scalar in sequence with comments before element
+-            (reported by `EJ Etherington <https://sourceforge.net/u/ejether/>`__)
+-        
+-        0.16.6 (2020-01-20):
+-          - fix empty string mapping key roundtripping with preservation of quotes as `? ''`
+-            (reported via email by Tomer Aharoni).
+-          - fix incorrect state setting in class constructor (reported by `Douglas Raillard
+-            <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+-          - adjust deprecation warning test for Hashable, as that no longer warns (reported
+-            by `Jason Montleon <https://bitbucket.org/%7B8f377d12-8d5b-4069-a662-00a2674fee4e%7D/>`__)
+-        
+-        0.16.5 (2019-08-18):
+-          - allow for ``YAML(typ=['unsafe', 'pytypes'])``
+-        
+-        0.16.4 (2019-08-16):
+-          - fix output of TAG directives with # (reported by `Thomas Smith
+-            <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-        
+-        
+-        0.16.3 (2019-08-15):
+-          - split construct_object
+-          - change stuff back to keep mypy happy
+-          - move setting of version based on YAML directive to scanner, allowing to
+-            check for file version during TAG directive scanning
+-        
+-        0.16.2 (2019-08-15):
+-          - preserve YAML and TAG directives on roundtrip, correctly output #
+-            in URL for YAML 1.2 (both reported by `Thomas Smith
+-            <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-        
+-        0.16.1 (2019-08-08):
+-          - Force the use of new version of ruamel.yaml.clib (reported by `Alex Joz
+-            <https://bitbucket.org/%7B9af55900-2534-4212-976c-61339b6ffe14%7D/>`__)
+-          - Allow '#' in tag URI as these are allowed in YAML 1.2 (reported by
+-            `Thomas Smith
+-            <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-        
+-        0.16.0 (2019-07-25):
+-          - split of C source that generates .so file to ruamel.yaml.clib
+-          - duplicate keys are now an error when working with the old API as well
+-        
+-        0.15.100 (2019-07-17):
+-          - fixing issue with dumping deep-copied data from commented YAML, by
+-            providing both the memo parameter to __deepcopy__, and by allowing
+-            startmarks to be compared on their content (reported by `Theofilos
+-            Petsios
+-            <https://bitbucket.org/%7Be550bc5d-403d-4fda-820b-bebbe71796d3%7D/>`__)
+-        
+-        0.15.99 (2019-07-12):
+-          - add `py.typed` to distribution, based on a PR submitted by
+-            `Michael Crusoe
+-            <https://bitbucket.org/%7Bc9fbde69-e746-48f5-900d-34992b7860c8%7D/>`__
+-          - merge PR 40 (also by Michael Crusoe) to more accurately specify
+-            repository in the README (also reported in a misunderstood issue
+-            some time ago)
+-        
+-        0.15.98 (2019-07-09):
+-          - regenerate ext/_ruamel_yaml.c with Cython version 0.29.12, needed
+-            for Python 3.8.0b2 (reported by `John Vandenberg
+-            <https://bitbucket.org/%7B6d4e8487-3c97-4dab-a060-088ec50c682c%7D/>`__)
+-        
+-        0.15.97 (2019-06-06):
+-          - regenerate ext/_ruamel_yaml.c with Cython version 0.29.10, needed for
+-            Python 3.8.0b1
+-          - regenerate ext/_ruamel_yaml.c with Cython version 0.29.9, needed for
+-            Python 3.8.0a4 (reported by `Anthony Sottile
+-            <https://bitbucket.org/%7B569cc8ea-0d9e-41cb-94a4-19ea517324df%7D/>`__)
+-        
+-        0.15.96 (2019-05-16):
+-          - fix failure to indent comments on round-trip anchored block style
+-            scalars in block sequence (reported by `William Kimball
+-            <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-        
+-        0.15.95 (2019-05-16):
+-          - fix failure to round-trip anchored scalars in block sequence
+-            (reported by `William Kimball
+-            <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-          - wheel files for Python 3.4 no longer provided (`Python 3.4 EOL 2019-03-18
+-            <https://www.python.org/dev/peps/pep-0429/>`__)
+-        
+-        0.15.94 (2019-04-23):
+-          - fix missing line-break after end-of-file comments not ending in
+-            line-break (reported by `Philip Thompson
+-            <https://bitbucket.org/%7Be42ba205-0876-4151-bcbe-ccaea5bd13ce%7D/>`__)
+-        
+-        0.15.93 (2019-04-21):
+-          - fix failure to parse empty implicit flow mapping key
+-          - in YAML 1.1 plains scalars `y`, 'n', `Y`, and 'N' are now
+-            correctly recognised as booleans and such strings dumped quoted
+-            (reported by `Marcel Bollmann
+-            <https://bitbucket.org/%7Bd8850921-9145-4ad0-ac30-64c3bd9b036d%7D/>`__)
+-        
+-        0.15.92 (2019-04-16):
+-          - fix failure to parse empty implicit block mapping key (reported by 
+-            `Nolan W <https://bitbucket.org/i2labs/>`__)
+-        
+-        0.15.91 (2019-04-05):
+-          - allowing duplicate keys would not work for merge keys (reported by mamacdon on
+-            `StackOverflow <https://stackoverflow.com/questions/55540686/>`__ 
+-        
+-        0.15.90 (2019-04-04):
+-          - fix issue with updating `CommentedMap` from list of tuples (reported by 
+-            `Peter Henry <https://bitbucket.org/mosbasik/>`__)
+-        
+-        0.15.89 (2019-02-27):
+-          - fix for items with flow-mapping in block sequence output on single line
+-            (reported by `Zahari Dim <https://bitbucket.org/zahari_dim/>`__)
+-          - fix for safe dumping erroring in creation of representereror when dumping namedtuple
+-            (reported and solution by `Jaakko Kantojärvi <https://bitbucket.org/raphendyr/>`__)
+-        
+-        0.15.88 (2019-02-12):
+-          - fix inclusing of python code from the subpackage data (containing extra tests,
+-            reported by `Florian Apolloner <https://bitbucket.org/apollo13/>`__)
+-        
+-        0.15.87 (2019-01-22):
+-          - fix problem with empty lists and the code to reinsert merge keys (reported via email 
+-             by Zaloo)
+-        
+-        0.15.86 (2019-01-16):
+-          - reinsert merge key in its old position (reported by grumbler on
+-            `StackOverflow <https://stackoverflow.com/a/54206512/1307905>`__)
+-          - fix for issue with non-ASCII anchor names (reported and fix
+-            provided by Dandaleon Flux via email)
+-          - fix for issue when parsing flow mapping value starting with colon (in pure Python only)
+-            (reported by `FichteFoll <https://bitbucket.org/FichteFoll/>`__)
+-        
+-        0.15.85 (2019-01-08):
+-          - the types used by ``SafeConstructor`` for mappings and sequences can
+-            now by set by assigning to ``XXXConstructor.yaml_base_dict_type``
+-            (and ``..._list_type``), preventing the need to copy two methods
+-            with 50+ lines that had ``var = {}`` hardcoded.  (Implemented to
+-            help solve an feature request by `Anthony Sottile
+-            <https://bitbucket.org/asottile/>`__ in an easier way)
+-        
+-        0.15.84 (2019-01-07):
+-          - fix for ``CommentedMap.copy()`` not returning ``CommentedMap``, let alone copying comments etc.
+-            (reported by `Anthony Sottile <https://bitbucket.org/asottile/>`__)
+-        
+-        0.15.83 (2019-01-02):
+-          - fix for bug in roundtripping aliases used as key (reported via email by Zaloo)
+-        
+-        0.15.82 (2018-12-28):
+-          - anchors and aliases on scalar int, float, string and bool are now preserved. Anchors
+-            do not need a referring alias for these (reported by 
+-            `Alex Harvey <https://bitbucket.org/alexharv074/>`__)
+-          - anchors no longer lost on tagged objects when roundtripping (reported by `Zaloo 
+-            <https://bitbucket.org/zaloo/>`__)
+-        
+-        0.15.81 (2018-12-06):
+-          - fix issue dumping methods of metaclass derived classes (reported and fix provided
+-            by `Douglas Raillard <https://bitbucket.org/DouglasRaillard/>`__)
+-        
+-        0.15.80 (2018-11-26):
+-          - fix issue emitting BEL character when round-tripping invalid folded input
+-            (reported by Isaac on `StackOverflow <https://stackoverflow.com/a/53471217/1307905>`__)
+-            
+-        0.15.79 (2018-11-21):
+-          - fix issue with anchors nested deeper than alias (reported by gaFF on
+-            `StackOverflow <https://stackoverflow.com/a/53397781/1307905>`__)
+-        
+-        0.15.78 (2018-11-15):
+-          - fix setup issue for 3.8 (reported by `Sidney Kuyateh 
+-            <https://bitbucket.org/autinerd/>`__)
+-        
+-        0.15.77 (2018-11-09):
+-          - setting `yaml.sort_base_mapping_type_on_output = False`, will prevent
+-            explicit sorting by keys in the base representer of mappings. Roundtrip
+-            already did not do this. Usage only makes real sense for Python 3.6+
+-            (feature request by `Sebastian Gerber <https://bitbucket.org/spacemanspiff2007/>`__).
+-          - implement Python version check in YAML metadata in ``_test/test_z_data.py``
+-        
+-        0.15.76 (2018-11-01):
+-          - fix issue with empty mapping and sequence loaded as flow-style
+-            (mapping reported by `Min RK <https://bitbucket.org/minrk/>`__, sequence
+-            by `Maged Ahmed <https://bitbucket.org/maged2/>`__)
+-        
+-        0.15.75 (2018-10-27):
+-          - fix issue with single '?' scalar (reported by `Terrance 
+-            <https://bitbucket.org/OllieTerrance/>`__)
+-          - fix issue with duplicate merge keys (prompted by `answering 
+-            <https://stackoverflow.com/a/52852106/1307905>`__ a 
+-            `StackOverflow question <https://stackoverflow.com/q/52851168/1307905>`__
+-            by `math <https://stackoverflow.com/users/1355634/math>`__)
+-        
+-        0.15.74 (2018-10-17):
+-          - fix dropping of comment on rt before sequence item that is sequence item
+-            (reported by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-        
+-        0.15.73 (2018-10-16):
+-          - fix irregular output on pre-comment in sequence within sequence (reported
+-            by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-          - allow non-compact (i.e. next line) dumping sequence/mapping within sequence.
+-        
+-        0.15.72 (2018-10-06):
+-          - fix regression on explicit 1.1 loading with the C based scanner/parser
+-            (reported by `Tomas Vavra <https://bitbucket.org/xtomik/>`__)
+-        
+-        0.15.71 (2018-09-26):
+-          - some of the tests now live in YAML files in the 
+-            `yaml.data <https://bitbucket.org/ruamel/yaml.data>`__ repository. 
+-            ``_test/test_z_data.py`` processes these.
+-          - fix regression where handcrafted CommentedMaps could not be initiated (reported by 
+-            `Dan Helfman <https://bitbucket.org/dhelfman/>`__)
+-          - fix regression with non-root literal scalars that needed indent indicator
+-            (reported by `Clark Breyman <https://bitbucket.org/clarkbreyman/>`__)
+-          - tag:yaml.org,2002:python/object/apply now also uses __qualname__ on PY3
+-            (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-          - issue with self-referring object creation
+-            (reported and fix by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-        
+-        0.15.70 (2018-09-21):
+-          - reverted CommentedMap and CommentedSeq to subclass ordereddict resp. list,
+-            reimplemented merge maps so that both ``dict(**commented_map_instance)`` and JSON
+-            dumping works. This also allows checking with ``isinstance()`` on ``dict`` resp. ``list``.
+-            (Proposed by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__, with feedback
+-            from `blhsing <https://stackoverflow.com/users/6890912/blhsing>`__ on
+-            `StackOverflow <https://stackoverflow.com/q/52314186/1307905>`__)
+-        
+-        0.15.69 (2018-09-20):
+-          - fix issue with dump_all gobbling end-of-document comments on parsing
+-            (reported by `Pierre B. <https://bitbucket.org/octplane/>`__)
+-        
+-        0.15.68 (2018-09-20):
+-          - fix issue with parsabel, but incorrect output with nested flow-style sequences
+-            (reported by `Dougal Seeley <https://bitbucket.org/dseeley/>`__)
+-          - fix issue with loading Python objects that have __setstate__ and recursion in parameters
+-            (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-        
+-        0.15.67 (2018-09-19):
+-          - fix issue with extra space inserted with non-root literal strings 
+-            (Issue reported and PR with fix provided by 
+-            `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-        
+-        0.15.66 (2018-09-07):
+-          - fix issue with fold indicating characters inserted in safe_load-ed folded strings
+-            (reported by `Maximilian Hils <https://bitbucket.org/mhils/>`__).
+-        
+-        0.15.65 (2018-09-07):
+-          - fix issue #232 revert to throw ParserError for unexcpected ``]``
+-            and ``}`` instead of IndexError. (Issue reported and PR with fix
+-            provided by `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-          - added ``key`` and ``reverse`` parameter (suggested by Jannik Klemm via email)
+-          - indent root level literal scalars that have directive or document end markers
+-            at the beginning of a line
+-        
+-        0.15.64 (2018-08-30):
+-          - support round-trip of tagged sequences: ``!Arg [a, {b: 1}]``
+-          - single entry mappings in flow sequences now written by default without braces,
+-            set ``yaml.brace_single_entry_mapping_in_flow_sequence=True`` to force
+-            getting ``[a, {b: 1}, {c: {d: 2}}]`` instead of the default ``[a, b: 1, c: {d: 2}]``
+-          - fix issue when roundtripping floats starting with a dot such as ``.5``
+-            (reported by `Harrison Gregg <https://bitbucket.org/HarrisonGregg/>`__)
+-        
+-        0.15.63 (2018-08-29):
+-          - small fix only necessary for Windows users that don't use wheels.
+-        
+-        0.15.62 (2018-08-29):
+-          - C based reader/scanner & emitter now allow setting of 1.2 as YAML version.
+-            ** The loading/dumping is still YAML 1.1 code**, so use the common subset of
+-            YAML 1.2 and 1.1 (reported by `Ge Yang <https://bitbucket.org/yangge/>`__)
+-        
+-        0.15.61 (2018-08-23):
+-          - support for round-tripping folded style scalars (initially requested 
+-            by `Johnathan Viduchinsky <https://bitbucket.org/johnathanvidu/>`__)
+-          - update of C code
+-          - speed up of scanning (~30% depending on the input)
+-        
+-        0.15.60 (2018-08-18):
+-          - again allow single entry map in flow sequence context (reported by 
+-            `Lee Goolsbee <https://bitbucket.org/lgoolsbee/>`__)
+-          - cleanup for mypy 
+-          - spurious print in library (reported by 
+-            `Lele Gaifax <https://bitbucket.org/lele/>`__), now automatically checked 
+-        
+-        0.15.59 (2018-08-17):
+-          - issue with C based loader and leading zeros (reported by 
+-            `Tom Hamilton Stubber <https://bitbucket.org/TomHamiltonStubber/>`__)
+-        
+-        0.15.58 (2018-08-17):
+-          - simple mappings can now be used as keys when round-tripping::
+-        
+-              {a: 1, b: 2}: hello world
+-              
+-            although using the obvious operations (del, popitem) on the key will
+-            fail, you can mutilate it by going through its attributes. If you load the
+-            above YAML in `d`, then changing the value is cumbersome:
+-        
+-                d = {CommentedKeyMap([('a', 1), ('b', 2)]): "goodbye"}
+-        
+-            and changing the key even more so:
+-        
+-                d[CommentedKeyMap([('b', 1), ('a', 2)])] = d.pop(
+-                             CommentedKeyMap([('a', 1), ('b', 2)]))
+-        
+-            (you can use a `dict` instead of a list of tuples (or ordereddict), but that might result
+-            in a different order, of the keys of the key, in the output)
+-          - check integers to dump with 1.2 patterns instead of 1.1 (reported by 
+-            `Lele Gaifax <https://bitbucket.org/lele/>`__)
+-          
+-        
+-        0.15.57 (2018-08-15):
+-          - Fix that CommentedSeq could no longer be used in adding or do a sort
+-            (reported by `Christopher Wright <https://bitbucket.org/CJ-Wright4242/>`__)
+-        
+-        0.15.56 (2018-08-15):
+-          - fix issue with ``python -O`` optimizing away code (reported, and detailed cause
+-            pinpointed, by `Alex Grönholm <https://bitbucket.org/agronholm/>`__)
+-        
+-        0.15.55 (2018-08-14):
+-          - unmade ``CommentedSeq`` a subclass of ``list``. It is now
+-            indirectly a subclass of the standard
+-            ``collections.abc.MutableSequence`` (without .abc if you are
+-            still on Python2.7). If you do ``isinstance(yaml.load('[1, 2]'),
+-            list)``) anywhere in your code replace ``list`` with
+-            ``MutableSequence``.  Directly, ``CommentedSeq`` is a subclass of
+-            the abstract baseclass ``ruamel.yaml.compat.MutableScliceableSequence``,
+-            with the result that *(extended) slicing is supported on 
+-            ``CommentedSeq``*.
+-            (reported by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__)
+-          - duplicate keys (or their values) with non-ascii now correctly
+-            report in Python2, instead of raising a Unicode error.
+-            (Reported by `Jonathan Pyle <https://bitbucket.org/jonathan_pyle/>`__)
+-        
+-        0.15.54 (2018-08-13):
+-          - fix issue where a comment could pop-up twice in the output (reported by 
+-            `Mike Kazantsev <https://bitbucket.org/mk_fg/>`__ and by 
+-            `Nate Peterson <https://bitbucket.org/ndpete21/>`__)
+-          - fix issue where JSON object (mapping) without spaces was not parsed
+-            properly (reported by `Marc Schmidt <https://bitbucket.org/marcj/>`__)
+-          - fix issue where comments after empty flow-style mappings were not emitted
+-            (reported by `Qinfench Chen <https://bitbucket.org/flyin5ish/>`__)
+-        
+-        0.15.53 (2018-08-12):
+-          - fix issue with flow style mapping with comments gobbled newline (reported
+-            by `Christopher Lambert <https://bitbucket.org/XN137/>`__)
+-          - fix issue where single '+' under YAML 1.2 was interpreted as
+-            integer, erroring out (reported by `Jethro Yu
+-            <https://bitbucket.org/jcppkkk/>`__)
+-        
+-        0.15.52 (2018-08-09):
+-          - added `.copy()` mapping representation for round-tripping
+-            (``CommentedMap``) to fix incomplete copies of merged mappings
+-            (reported by `Will Richards
+-            <https://bitbucket.org/will_richards/>`__) 
+-          - Also unmade that class a subclass of ordereddict to solve incorrect behaviour
+-            for ``{**merged-mapping}`` and ``dict(**merged-mapping)`` (reported independently by
+-            `Tim Olsson <https://bitbucket.org/tgolsson/>`__ and 
+-            `Filip Matzner <https://bitbucket.org/FloopCZ/>`__)
+-        
+-        0.15.51 (2018-08-08):
+-          - Fix method name dumps (were not dotted) and loads (reported by `Douglas Raillard 
+-            <https://bitbucket.org/DouglasRaillard/>`__)
+-          - Fix spurious trailing white-space caused when the comment start
+-            column was no longer reached and there was no actual EOL comment
+-            (e.g. following empty line) and doing substitutions, or when
+-            quotes around scalars got dropped.  (reported by `Thomas Guillet
+-            <https://bitbucket.org/guillett/>`__)
+-        
+-        0.15.50 (2018-08-05):
+-          - Allow ``YAML()`` as a context manager for output, thereby making it much easier
+-            to generate multi-documents in a stream. 
+-          - Fix issue with incorrect type information for `load()` and `dump()` (reported 
+-            by `Jimbo Jim <https://bitbucket.org/jimbo1qaz/>`__)
+-        
+-        0.15.49 (2018-08-05):
+-          - fix preservation of leading newlines in root level literal style scalar,
+-            and preserve comment after literal style indicator (``|  # some comment``)
+-            Both needed for round-tripping multi-doc streams in 
+-            `ryd <https://pypi.org/project/ryd/>`__.
+-        
+-        0.15.48 (2018-08-03):
+-          - housekeeping: ``oitnb`` for formatting, mypy 0.620 upgrade and conformity
+-        
+-        0.15.47 (2018-07-31):
+-          - fix broken 3.6 manylinux1, the result of an unclean ``build`` (reported by 
+-            `Roman Sichnyi <https://bitbucket.org/rsichnyi-gl/>`__)
+-        
+-        
+-        0.15.46 (2018-07-29):
+-          - fixed DeprecationWarning for importing from ``collections`` on 3.7
+-            (issue 210, reported by `Reinoud Elhorst
+-            <https://bitbucket.org/reinhrst/>`__). It was `difficult to find
+-            why tox/pytest did not report
+-            <https://stackoverflow.com/q/51573204/1307905>`__ and as time
+-            consuming to actually `fix
+-            <https://stackoverflow.com/a/51573205/1307905>`__ the tests.
+-        
+-        0.15.45 (2018-07-26):
+-          - After adding failing test for ``YAML.load_all(Path())``, remove StopIteration 
+-            (PR provided by `Zachary Buhman <https://bitbucket.org/buhman/>`__,
+-            also reported by `Steven Hiscocks <https://bitbucket.org/sdhiscocks/>`__.
+-        
+-        0.15.44 (2018-07-14):
+-          - Correct loading plain scalars consisting of numerals only and
+-            starting with `0`, when not explicitly specifying YAML version
+-            1.1. This also fixes the issue about dumping string `'019'` as
+-            plain scalars as reported by `Min RK
+-            <https://bitbucket.org/minrk/>`__, that prompted this chance.
+-        
+-        0.15.43 (2018-07-12):
+-          - merge PR33: Python2.7 on Windows is narrow, but has no
+-            ``sysconfig.get_config_var('Py_UNICODE_SIZE')``. (merge provided by
+-            `Marcel Bargull <https://bitbucket.org/mbargull/>`__)
+-          - ``register_class()`` now returns class (proposed by
+-            `Mike Nerone <https://bitbucket.org/Manganeez/>`__}
+-        
+-        0.15.42 (2018-07-01):
+-          - fix regression showing only on narrow Python 2.7 (py27mu) builds
+-            (with help from
+-            `Marcel Bargull <https://bitbucket.org/mbargull/>`__ and
+-            `Colm O'Connor <https://bitbucket.org/colmoconnorgithub/>`__).
+-          - run pre-commit ``tox`` on Python 2.7 wide and narrow, as well as
+-            3.4/3.5/3.6/3.7/pypy
+-        
+-        0.15.41 (2018-06-27):
+-          - add detection of C-compile failure (investigation prompted by
+-            `StackOverlow <https://stackoverflow.com/a/51057399/1307905>`__ by
+-            `Emmanuel Blot <https://stackoverflow.com/users/8233409/emmanuel-blot>`__),
+-            which was removed while no longer dependent on ``libyaml``, C-extensions
+-            compilation still needs a compiler though.
+-        
+-        0.15.40 (2018-06-18):
+-          - added links to landing places as suggested in issue 190 by
+-            `KostisA <https://bitbucket.org/ankostis/>`__
+-          - fixes issue #201: decoding unicode escaped tags on Python2, reported
+-            by `Dan Abolafia <https://bitbucket.org/danabo/>`__
+-        
+-        0.15.39 (2018-06-17):
+-          - merge PR27 improving package startup time (and loading when regexp not
+-            actually used), provided by
+-            `Marcel Bargull <https://bitbucket.org/mbargull/>`__
+-        
+-        0.15.38 (2018-06-13):
+-          - fix for losing precision when roundtripping floats by
+-            `Rolf Wojtech <https://bitbucket.org/asomov/>`__
+-          - fix for hardcoded dir separator not working for Windows by
+-            `Nuno André <https://bitbucket.org/nu_no/>`__
+-          - typo fix by `Andrey Somov <https://bitbucket.org/asomov/>`__
+-        
+-        0.15.37 (2018-03-21):
+-          - again trying to create installable files for 187
+-        
+-        0.15.36 (2018-02-07):
+-          - fix issue 187, incompatibility of C extension with 3.7 (reported by
+-            Daniel Blanchard)
+-        
+-        0.15.35 (2017-12-03):
+-          - allow ``None`` as stream when specifying ``transform`` parameters to
+-            ``YAML.dump()``.
+-            This is useful if the transforming function doesn't return a meaningful value
+-            (inspired by `StackOverflow <https://stackoverflow.com/q/47614862/1307905>`__ by
+-            `rsaw <https://stackoverflow.com/users/406281/rsaw>`__).
+-        
+-        0.15.34 (2017-09-17):
+-          - fix for issue 157: CDumper not dumping floats (reported by Jan Smitka)
+-        
+-        0.15.33 (2017-08-31):
+-          - support for "undefined" round-tripping tagged scalar objects (in addition to
+-            tagged mapping object). Inspired by a use case presented by Matthew Patton
+-            on `StackOverflow <https://stackoverflow.com/a/45967047/1307905>`__.
+-          - fix issue 148: replace cryptic error message when using !!timestamp with an
+-            incorrectly formatted or non- scalar. Reported by FichteFoll.
+-        
+-        0.15.32 (2017-08-21):
+-          - allow setting ``yaml.default_flow_style = None`` (default: ``False``) for
+-            for ``typ='rt'``.
+-          - fix for issue 149: multiplications on ``ScalarFloat`` now return ``float``
+-            (reported by jan.brezina@tul.cz)
+-        
+-        0.15.31 (2017-08-15):
+-          - fix Comment dumping
+-        
+-        0.15.30 (2017-08-14):
+-          - fix for issue with "compact JSON" not parsing: ``{"in":{},"out":{}}``
+-            (reported on `StackOverflow <https://stackoverflow.com/q/45681626/1307905>`__ by
+-            `mjalkio <https://stackoverflow.com/users/5130525/mjalkio>`_
+-        
+-        0.15.29 (2017-08-14):
+-          - fix issue #51: different indents for mappings and sequences (reported by
+-            Alex Harvey)
+-          - fix for flow sequence/mapping as element/value of block sequence with
+-            sequence-indent minus dash-offset not equal two.
+-        
+-        0.15.28 (2017-08-13):
+-          - fix issue #61: merge of merge cannot be __repr__-ed (reported by Tal Liron)
+-        
+-        0.15.27 (2017-08-13):
+-          - fix issue 62, YAML 1.2 allows ``?`` and ``:`` in plain scalars if non-ambigious
+-            (reported by nowox)
+-          - fix lists within lists which would make comments disappear
+-        
+-        0.15.26 (2017-08-10):
+-          - fix for disappearing comment after empty flow sequence (reported by
+-            oit-tzhimmash)
+-        
+-        0.15.25 (2017-08-09):
+-          - fix for problem with dumping (unloaded) floats (reported by eyenseo)
+-        
+-        0.15.24 (2017-08-09):
+-          - added ScalarFloat which supports roundtripping of 23.1, 23.100,
+-            42.00E+56, 0.0, -0.0 etc. while keeping the format. Underscores in mantissas
+-            are not preserved/supported (yet, is anybody using that?).
+-          - (finally) fixed longstanding issue 23 (reported by `Antony Sottile
+-            <https://bitbucket.org/asottile/>`__), now handling comment between block
+-            mapping key and value correctly
+-          - warn on YAML 1.1 float input that is incorrect (triggered by invalid YAML
+-            provided by Cecil Curry)
+-          - allow setting of boolean representation (`false`, `true`) by using:
+-            ``yaml.boolean_representation = [u'False', u'True']``
+-        
+-        0.15.23 (2017-08-01):
+-          - fix for round_tripping integers on 2.7.X > sys.maxint (reported by ccatterina)
+-        
+-        0.15.22 (2017-07-28):
+-          - fix for round_tripping singe excl. mark tags doubling (reported and fix by Jan Brezina)
+-        
+-        0.15.21 (2017-07-25):
+-          - fix for writing unicode in new API, (reported on
+-            `StackOverflow <https://stackoverflow.com/a/45281922/1307905>`__
+-        
+-        0.15.20 (2017-07-23):
+-          - wheels for windows including C extensions
+-        
+-        0.15.19 (2017-07-13):
+-          - added object constructor for rt, decorator ``yaml_object`` to replace YAMLObject.
+-          - fix for problem using load_all with Path() instance
+-          - fix for load_all in combination with zero indent block style literal
+-            (``pure=True`` only!)
+-        
+-        0.15.18 (2017-07-04):
+-          - missing ``pure`` attribute on ``YAML`` useful for implementing `!include` tag
+-            constructor for `including YAML files in a YAML file
+-            <https://stackoverflow.com/a/44913652/1307905>`__
+-          - some documentation improvements
+-          - trigger of doc build on new revision
+-        
+-        0.15.17 (2017-07-03):
+-          - support for Unicode supplementary Plane **output**
+-            (input was already supported, triggered by
+-            `this <https://stackoverflow.com/a/44875714/1307905>`__ Stack Overflow Q&A)
+-        
+-        0.15.16 (2017-07-01):
+-          - minor typing issues (reported and fix provided by
+-            `Manvendra Singh <https://bitbucket.org/manu-chroma/>`__
+-          - small doc improvements
+-        
+-        0.15.15 (2017-06-27):
+-          - fix for issue 135, typ='safe' not dumping in Python 2.7
+-            (reported by Andrzej Ostrowski <https://bitbucket.org/aostr123/>`__)
+-        
+-        0.15.14 (2017-06-25):
+-          - fix for issue 133, in setup.py: change ModuleNotFoundError to
+-            ImportError (reported and fix by
+-            `Asley Drake  <https://github.com/aldraco>`__)
+-        
+-        0.15.13 (2017-06-24):
+-          - suppress duplicate key warning on mappings with merge keys (reported by
+-            Cameron Sweeney)
+-        
+-        0.15.12 (2017-06-24):
+-          - remove fatal dependency of setup.py on wheel package (reported by
+-            Cameron Sweeney)
+-        
+-        0.15.11 (2017-06-24):
+-          - fix for issue 130, regression in nested merge keys (reported by
+-            `David Fee <https://bitbucket.org/dfee/>`__)
+-        
+-        0.15.10 (2017-06-23):
+-          - top level PreservedScalarString not indented if not explicitly asked to
+-          - remove Makefile (not very useful anyway)
+-          - some mypy additions
+-        
+-        0.15.9 (2017-06-16):
+-          - fix for issue 127: tagged scalars were always quoted and seperated
+-            by a newline when in a block sequence (reported and largely fixed by
+-            `Tommy Wang <https://bitbucket.org/twang817/>`__)
+-        
+-        0.15.8 (2017-06-15):
+-          - allow plug-in install via ``install ruamel.yaml[jinja2]``
+-        
+-        0.15.7 (2017-06-14):
+-          - add plug-in mechanism for load/dump pre resp. post-processing
+-        
+-        0.15.6 (2017-06-10):
+-          - a set() with duplicate elements now throws error in rt loading
+-          - support for toplevel column zero literal/folded scalar in explicit documents
+-        
+-        0.15.5 (2017-06-08):
+-          - repeat `load()` on a single `YAML()` instance would fail.
+-        
+-        0.15.4 (2017-06-08):
+-          - `transform` parameter on dump that expects a function taking a
+-            string and returning a string. This allows transformation of the output
+-            before it is written to stream. This forces creation of the complete output in memory!
+-          - some updates to the docs
+-        
+-        0.15.3 (2017-06-07):
+-          - No longer try to compile C extensions on Windows. Compilation can be forced by setting
+-            the environment variable `RUAMEL_FORCE_EXT_BUILD` to some value
+-            before starting the `pip install`.
+-        
+-        0.15.2 (2017-06-07):
+-          - update to conform to mypy 0.511: mypy --strict
+-        
+-        0.15.1 (2017-06-07):
+-          - `duplicate keys  <http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys>`__
+-            in mappings generate an error (in the old API this change generates a warning until 0.16)
+-          - dependecy on ruamel.ordereddict for 2.7 now via extras_require
+-        
+-        0.15.0 (2017-06-04):
+-          - it is now allowed to pass in a ``pathlib.Path`` as "stream" parameter to all
+-            load/dump functions
+-          - passing in a non-supported object (e.g. a string) as "stream" will result in a
+-            much more meaningful YAMLStreamError.
+-          - assigning a normal string value to an existing CommentedMap key or CommentedSeq
+-            element will result in a value cast to the previous value's type if possible.
+-          - added ``YAML`` class for new API
+-        
+-        0.14.12 (2017-05-14):
+-          - fix for issue 119, deepcopy not returning subclasses (reported and PR by
+-            Constantine Evans <cevans@evanslabs.org>)
+-        
+-        0.14.11 (2017-05-01):
+-          - fix for issue 103 allowing implicit documents after document end marker line (``...``)
+-            in YAML 1.2
+-        
+-        0.14.10 (2017-04-26):
+-          - fix problem with emitting using cyaml
+-        
+-        0.14.9 (2017-04-22):
+-          - remove dependency on ``typing`` while still supporting ``mypy``
+-            (http://stackoverflow.com/a/43516781/1307905)
+-          - fix unclarity in doc that stated 2.6 is supported (reported by feetdust)
+-        
+-        0.14.8 (2017-04-19):
+-          - fix Text not available on 3.5.0 and 3.5.1, now proactively setting version guards
+-            on all files (reported by `João Paulo Magalhães <https://bitbucket.org/jpmag/>`__)
+-        
+-        0.14.7 (2017-04-18):
+-          - round trip of integers (decimal, octal, hex, binary) now preserve
+-            leading zero(s) padding and underscores. Underscores are presumed
+-            to be at regular distances (i.e. ``0o12_345_67`` dumps back as
+-            ``0o1_23_45_67`` as the space from the last digit to the
+-            underscore before that is the determining factor).
+-        
+-        0.14.6 (2017-04-14):
+-          - binary, octal and hex integers are now preserved by default. This
+-            was a known deficiency. Working on this was prompted by the issue report (112)
+-            from devnoname120, as well as the additional experience with `.replace()`
+-            on `scalarstring` classes.
+-          - fix issues 114: cannot install on Buildozer (reported by mixmastamyk).
+-            Setting env. var ``RUAMEL_NO_PIP_INSTALL_CHECK`` will suppress ``pip``-check.
+-        
+-        0.14.5 (2017-04-04):
+-          - fix issue 109: None not dumping correctly at top level (reported by Andrea Censi)
+-          - fix issue 110: .replace on Preserved/DoubleQuoted/SingleQuoted ScalarString
+-            would give back "normal" string (reported by sandres23)
+-        
+-        0.14.4 (2017-03-31):
+-          - fix readme
+-        
+-        0.14.3 (2017-03-31):
+-          - fix for 0o52 not being a string in YAML 1.1 (reported on
+-            `StackOverflow Q&A 43138503 <http://stackoverflow.com/a/43138503/1307905>`__ by
+-            `Frank D <http://stackoverflow.com/users/7796630/frank-d>`__)
+-        
+-        0.14.2 (2017-03-23):
+-          - fix for old default pip on Ubuntu 14.04 (reported by Sébastien Maccagnoni-Munch)
+-        
+-        0.14.1 (2017-03-22):
+-          - fix Text not available on 3.5.0 and 3.5.1 (reported by Charles Bouchard-Légaré)
+-        
+-        0.14.0 (2017-03-21):
+-          - updates for mypy --strict
+-          - preparation for moving away from inheritance in Loader and Dumper, calls from e.g.
+-            the Representer to the Serializer.serialize() are now done via the attribute
+-            .serializer.serialize(). Usage of .serialize() outside of Serializer will be
+-            deprecated soon
+-          - some extra tests on main.py functions
+-        
+-        ----
+-        
+-        For older changes see the file
+-        `CHANGES <https://bitbucket.org/ruamel/yaml/src/default/CHANGES>`_
+-        
+-Keywords: yaml 1.2 parser round-trip preserve quotes order config
+-Platform: UNKNOWN
+-Classifier: Development Status :: 4 - Beta
+-Classifier: Intended Audience :: Developers
+-Classifier: License :: OSI Approved :: MIT License
+-Classifier: Operating System :: OS Independent
+-Classifier: Programming Language :: Python
+-Classifier: Programming Language :: Python :: 2.7
+-Classifier: Programming Language :: Python :: 3.5
+-Classifier: Programming Language :: Python :: 3.6
+-Classifier: Programming Language :: Python :: 3.7
+-Classifier: Programming Language :: Python :: 3.8
+-Classifier: Programming Language :: Python :: Implementation :: CPython
+-Classifier: Programming Language :: Python :: Implementation :: Jython
+-Classifier: Programming Language :: Python :: Implementation :: PyPy
+-Classifier: Topic :: Software Development :: Libraries :: Python Modules
+-Classifier: Topic :: Text Processing :: Markup
+-Classifier: Typing :: Typed
+-Description-Content-Type: text/x-rst
+-Provides-Extra: docs
+-Provides-Extra: jinja2
+diff --git a/dynaconf/vendor/ruamel/yaml/README.rst b/dynaconf/vendor/ruamel/yaml/README.rst
+deleted file mode 100644
+index 2a99cb9..0000000
+--- a/dynaconf/vendor/ruamel/yaml/README.rst
++++ /dev/null
+@@ -1,752 +0,0 @@
+-
+-ruamel.yaml
+-===========
+-
+-``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python.
+-
+-:version:       0.16.10
+-:updated:       2020-02-12
+-:documentation: http://yaml.readthedocs.io
+-:repository:    https://bitbucket.org/ruamel/yaml
+-:pypi:          https://pypi.org/project/ruamel.yaml/
+-
+-
+-Starting with version 0.15.0 the way YAML files are loaded and dumped
+-is changing. See the API doc for details.  Currently existing
+-functionality will throw a warning before being changed/removed.
+-**For production systems you should pin the version being used with
+-``ruamel.yaml<=0.15``**. There might be bug fixes in the 0.14 series,
+-but new functionality is likely only to be available via the new API.
+-
+-If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop
+-me an email, preferably with some information on how you use the
+-package (or a link to bitbucket/github) and I'll keep you informed
+-when the status of the API is stable enough to make the transition.
+-
+-* `Overview <http://yaml.readthedocs.org/en/latest/overview.html>`_
+-* `Installing <http://yaml.readthedocs.org/en/latest/install.html>`_
+-* `Basic Usage <http://yaml.readthedocs.org/en/latest/basicuse.html>`_
+-* `Details <http://yaml.readthedocs.org/en/latest/detail.html>`_
+-* `Examples <http://yaml.readthedocs.org/en/latest/example.html>`_
+-* `API <http://yaml.readthedocs.org/en/latest/api.html>`_
+-* `Differences with PyYAML <http://yaml.readthedocs.org/en/latest/pyyaml.html>`_
+-
+-.. image:: https://readthedocs.org/projects/yaml/badge/?version=stable
+-   :target: https://yaml.readthedocs.org/en/stable
+-
+-.. image:: https://bestpractices.coreinfrastructure.org/projects/1128/badge
+-   :target: https://bestpractices.coreinfrastructure.org/projects/1128
+-
+-.. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/license.svg?format=raw
+-   :target: https://opensource.org/licenses/MIT
+-
+-.. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/pypi.svg?format=raw
+-   :target: https://pypi.org/project/ruamel.yaml/
+-
+-.. image:: https://sourceforge.net/p/oitnb/code/ci/default/tree/_doc/_static/oitnb.svg?format=raw
+-   :target: https://pypi.org/project/oitnb/
+-
+-.. image:: http://www.mypy-lang.org/static/mypy_badge.svg
+-   :target: http://mypy-lang.org/
+-
+-ChangeLog
+-=========
+-
+-.. should insert NEXT: at the beginning of line for next key (with empty line)
+-
+-0.16.10 (2020-02-12):
+-  - (auto) updated image references in README to sourceforge
+-
+-0.16.9 (2020-02-11):
+-  - update CHANGES
+-
+-0.16.8 (2020-02-11):
+-  - update requirements so that ruamel.yaml.clib is installed for 3.8,
+-    as it has become available (via manylinux builds)
+-
+-0.16.7 (2020-01-30):
+-  - fix typchecking issue on TaggedScalar (reported by Jens Nielsen)
+-  - fix error in dumping literal scalar in sequence with comments before element
+-    (reported by `EJ Etherington <https://sourceforge.net/u/ejether/>`__)
+-
+-0.16.6 (2020-01-20):
+-  - fix empty string mapping key roundtripping with preservation of quotes as `? ''`
+-    (reported via email by Tomer Aharoni).
+-  - fix incorrect state setting in class constructor (reported by `Douglas Raillard
+-    <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+-  - adjust deprecation warning test for Hashable, as that no longer warns (reported
+-    by `Jason Montleon <https://bitbucket.org/%7B8f377d12-8d5b-4069-a662-00a2674fee4e%7D/>`__)
+-
+-0.16.5 (2019-08-18):
+-  - allow for ``YAML(typ=['unsafe', 'pytypes'])``
+-
+-0.16.4 (2019-08-16):
+-  - fix output of TAG directives with # (reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-
+-0.16.3 (2019-08-15):
+-  - split construct_object
+-  - change stuff back to keep mypy happy
+-  - move setting of version based on YAML directive to scanner, allowing to
+-    check for file version during TAG directive scanning
+-
+-0.16.2 (2019-08-15):
+-  - preserve YAML and TAG directives on roundtrip, correctly output #
+-    in URL for YAML 1.2 (both reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-0.16.1 (2019-08-08):
+-  - Force the use of new version of ruamel.yaml.clib (reported by `Alex Joz
+-    <https://bitbucket.org/%7B9af55900-2534-4212-976c-61339b6ffe14%7D/>`__)
+-  - Allow '#' in tag URI as these are allowed in YAML 1.2 (reported by
+-    `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-0.16.0 (2019-07-25):
+-  - split of C source that generates .so file to ruamel.yaml.clib
+-  - duplicate keys are now an error when working with the old API as well
+-
+-0.15.100 (2019-07-17):
+-  - fixing issue with dumping deep-copied data from commented YAML, by
+-    providing both the memo parameter to __deepcopy__, and by allowing
+-    startmarks to be compared on their content (reported by `Theofilos
+-    Petsios
+-    <https://bitbucket.org/%7Be550bc5d-403d-4fda-820b-bebbe71796d3%7D/>`__)
+-
+-0.15.99 (2019-07-12):
+-  - add `py.typed` to distribution, based on a PR submitted by
+-    `Michael Crusoe
+-    <https://bitbucket.org/%7Bc9fbde69-e746-48f5-900d-34992b7860c8%7D/>`__
+-  - merge PR 40 (also by Michael Crusoe) to more accurately specify
+-    repository in the README (also reported in a misunderstood issue
+-    some time ago)
+-
+-0.15.98 (2019-07-09):
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.12, needed
+-    for Python 3.8.0b2 (reported by `John Vandenberg
+-    <https://bitbucket.org/%7B6d4e8487-3c97-4dab-a060-088ec50c682c%7D/>`__)
+-
+-0.15.97 (2019-06-06):
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.10, needed for
+-    Python 3.8.0b1
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.9, needed for
+-    Python 3.8.0a4 (reported by `Anthony Sottile
+-    <https://bitbucket.org/%7B569cc8ea-0d9e-41cb-94a4-19ea517324df%7D/>`__)
+-
+-0.15.96 (2019-05-16):
+-  - fix failure to indent comments on round-trip anchored block style
+-    scalars in block sequence (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-
+-0.15.95 (2019-05-16):
+-  - fix failure to round-trip anchored scalars in block sequence
+-    (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-  - wheel files for Python 3.4 no longer provided (`Python 3.4 EOL 2019-03-18
+-    <https://www.python.org/dev/peps/pep-0429/>`__)
+-
+-0.15.94 (2019-04-23):
+-  - fix missing line-break after end-of-file comments not ending in
+-    line-break (reported by `Philip Thompson
+-    <https://bitbucket.org/%7Be42ba205-0876-4151-bcbe-ccaea5bd13ce%7D/>`__)
+-
+-0.15.93 (2019-04-21):
+-  - fix failure to parse empty implicit flow mapping key
+-  - in YAML 1.1 plains scalars `y`, 'n', `Y`, and 'N' are now
+-    correctly recognised as booleans and such strings dumped quoted
+-    (reported by `Marcel Bollmann
+-    <https://bitbucket.org/%7Bd8850921-9145-4ad0-ac30-64c3bd9b036d%7D/>`__)
+-
+-0.15.92 (2019-04-16):
+-  - fix failure to parse empty implicit block mapping key (reported by 
+-    `Nolan W <https://bitbucket.org/i2labs/>`__)
+-
+-0.15.91 (2019-04-05):
+-  - allowing duplicate keys would not work for merge keys (reported by mamacdon on
+-    `StackOverflow <https://stackoverflow.com/questions/55540686/>`__ 
+-
+-0.15.90 (2019-04-04):
+-  - fix issue with updating `CommentedMap` from list of tuples (reported by 
+-    `Peter Henry <https://bitbucket.org/mosbasik/>`__)
+-
+-0.15.89 (2019-02-27):
+-  - fix for items with flow-mapping in block sequence output on single line
+-    (reported by `Zahari Dim <https://bitbucket.org/zahari_dim/>`__)
+-  - fix for safe dumping erroring in creation of representereror when dumping namedtuple
+-    (reported and solution by `Jaakko Kantojärvi <https://bitbucket.org/raphendyr/>`__)
+-
+-0.15.88 (2019-02-12):
+-  - fix inclusing of python code from the subpackage data (containing extra tests,
+-    reported by `Florian Apolloner <https://bitbucket.org/apollo13/>`__)
+-
+-0.15.87 (2019-01-22):
+-  - fix problem with empty lists and the code to reinsert merge keys (reported via email 
+-     by Zaloo)
+-
+-0.15.86 (2019-01-16):
+-  - reinsert merge key in its old position (reported by grumbler on
+-    `StackOverflow <https://stackoverflow.com/a/54206512/1307905>`__)
+-  - fix for issue with non-ASCII anchor names (reported and fix
+-    provided by Dandaleon Flux via email)
+-  - fix for issue when parsing flow mapping value starting with colon (in pure Python only)
+-    (reported by `FichteFoll <https://bitbucket.org/FichteFoll/>`__)
+-
+-0.15.85 (2019-01-08):
+-  - the types used by ``SafeConstructor`` for mappings and sequences can
+-    now by set by assigning to ``XXXConstructor.yaml_base_dict_type``
+-    (and ``..._list_type``), preventing the need to copy two methods
+-    with 50+ lines that had ``var = {}`` hardcoded.  (Implemented to
+-    help solve an feature request by `Anthony Sottile
+-    <https://bitbucket.org/asottile/>`__ in an easier way)
+-
+-0.15.84 (2019-01-07):
+-  - fix for ``CommentedMap.copy()`` not returning ``CommentedMap``, let alone copying comments etc.
+-    (reported by `Anthony Sottile <https://bitbucket.org/asottile/>`__)
+-
+-0.15.83 (2019-01-02):
+-  - fix for bug in roundtripping aliases used as key (reported via email by Zaloo)
+-
+-0.15.82 (2018-12-28):
+-  - anchors and aliases on scalar int, float, string and bool are now preserved. Anchors
+-    do not need a referring alias for these (reported by 
+-    `Alex Harvey <https://bitbucket.org/alexharv074/>`__)
+-  - anchors no longer lost on tagged objects when roundtripping (reported by `Zaloo 
+-    <https://bitbucket.org/zaloo/>`__)
+-
+-0.15.81 (2018-12-06):
+-  - fix issue dumping methods of metaclass derived classes (reported and fix provided
+-    by `Douglas Raillard <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-0.15.80 (2018-11-26):
+-  - fix issue emitting BEL character when round-tripping invalid folded input
+-    (reported by Isaac on `StackOverflow <https://stackoverflow.com/a/53471217/1307905>`__)
+-    
+-0.15.79 (2018-11-21):
+-  - fix issue with anchors nested deeper than alias (reported by gaFF on
+-    `StackOverflow <https://stackoverflow.com/a/53397781/1307905>`__)
+-
+-0.15.78 (2018-11-15):
+-  - fix setup issue for 3.8 (reported by `Sidney Kuyateh 
+-    <https://bitbucket.org/autinerd/>`__)
+-
+-0.15.77 (2018-11-09):
+-  - setting `yaml.sort_base_mapping_type_on_output = False`, will prevent
+-    explicit sorting by keys in the base representer of mappings. Roundtrip
+-    already did not do this. Usage only makes real sense for Python 3.6+
+-    (feature request by `Sebastian Gerber <https://bitbucket.org/spacemanspiff2007/>`__).
+-  - implement Python version check in YAML metadata in ``_test/test_z_data.py``
+-
+-0.15.76 (2018-11-01):
+-  - fix issue with empty mapping and sequence loaded as flow-style
+-    (mapping reported by `Min RK <https://bitbucket.org/minrk/>`__, sequence
+-    by `Maged Ahmed <https://bitbucket.org/maged2/>`__)
+-
+-0.15.75 (2018-10-27):
+-  - fix issue with single '?' scalar (reported by `Terrance 
+-    <https://bitbucket.org/OllieTerrance/>`__)
+-  - fix issue with duplicate merge keys (prompted by `answering 
+-    <https://stackoverflow.com/a/52852106/1307905>`__ a 
+-    `StackOverflow question <https://stackoverflow.com/q/52851168/1307905>`__
+-    by `math <https://stackoverflow.com/users/1355634/math>`__)
+-
+-0.15.74 (2018-10-17):
+-  - fix dropping of comment on rt before sequence item that is sequence item
+-    (reported by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-
+-0.15.73 (2018-10-16):
+-  - fix irregular output on pre-comment in sequence within sequence (reported
+-    by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-  - allow non-compact (i.e. next line) dumping sequence/mapping within sequence.
+-
+-0.15.72 (2018-10-06):
+-  - fix regression on explicit 1.1 loading with the C based scanner/parser
+-    (reported by `Tomas Vavra <https://bitbucket.org/xtomik/>`__)
+-
+-0.15.71 (2018-09-26):
+-  - some of the tests now live in YAML files in the 
+-    `yaml.data <https://bitbucket.org/ruamel/yaml.data>`__ repository. 
+-    ``_test/test_z_data.py`` processes these.
+-  - fix regression where handcrafted CommentedMaps could not be initiated (reported by 
+-    `Dan Helfman <https://bitbucket.org/dhelfman/>`__)
+-  - fix regression with non-root literal scalars that needed indent indicator
+-    (reported by `Clark Breyman <https://bitbucket.org/clarkbreyman/>`__)
+-  - tag:yaml.org,2002:python/object/apply now also uses __qualname__ on PY3
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-  - issue with self-referring object creation
+-    (reported and fix by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-0.15.70 (2018-09-21):
+-  - reverted CommentedMap and CommentedSeq to subclass ordereddict resp. list,
+-    reimplemented merge maps so that both ``dict(**commented_map_instance)`` and JSON
+-    dumping works. This also allows checking with ``isinstance()`` on ``dict`` resp. ``list``.
+-    (Proposed by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__, with feedback
+-    from `blhsing <https://stackoverflow.com/users/6890912/blhsing>`__ on
+-    `StackOverflow <https://stackoverflow.com/q/52314186/1307905>`__)
+-
+-0.15.69 (2018-09-20):
+-  - fix issue with dump_all gobbling end-of-document comments on parsing
+-    (reported by `Pierre B. <https://bitbucket.org/octplane/>`__)
+-
+-0.15.68 (2018-09-20):
+-  - fix issue with parsabel, but incorrect output with nested flow-style sequences
+-    (reported by `Dougal Seeley <https://bitbucket.org/dseeley/>`__)
+-  - fix issue with loading Python objects that have __setstate__ and recursion in parameters
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-0.15.67 (2018-09-19):
+-  - fix issue with extra space inserted with non-root literal strings 
+-    (Issue reported and PR with fix provided by 
+-    `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-
+-0.15.66 (2018-09-07):
+-  - fix issue with fold indicating characters inserted in safe_load-ed folded strings
+-    (reported by `Maximilian Hils <https://bitbucket.org/mhils/>`__).
+-
+-0.15.65 (2018-09-07):
+-  - fix issue #232 revert to throw ParserError for unexcpected ``]``
+-    and ``}`` instead of IndexError. (Issue reported and PR with fix
+-    provided by `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-  - added ``key`` and ``reverse`` parameter (suggested by Jannik Klemm via email)
+-  - indent root level literal scalars that have directive or document end markers
+-    at the beginning of a line
+-
+-0.15.64 (2018-08-30):
+-  - support round-trip of tagged sequences: ``!Arg [a, {b: 1}]``
+-  - single entry mappings in flow sequences now written by default without braces,
+-    set ``yaml.brace_single_entry_mapping_in_flow_sequence=True`` to force
+-    getting ``[a, {b: 1}, {c: {d: 2}}]`` instead of the default ``[a, b: 1, c: {d: 2}]``
+-  - fix issue when roundtripping floats starting with a dot such as ``.5``
+-    (reported by `Harrison Gregg <https://bitbucket.org/HarrisonGregg/>`__)
+-
+-0.15.63 (2018-08-29):
+-  - small fix only necessary for Windows users that don't use wheels.
+-
+-0.15.62 (2018-08-29):
+-  - C based reader/scanner & emitter now allow setting of 1.2 as YAML version.
+-    ** The loading/dumping is still YAML 1.1 code**, so use the common subset of
+-    YAML 1.2 and 1.1 (reported by `Ge Yang <https://bitbucket.org/yangge/>`__)
+-
+-0.15.61 (2018-08-23):
+-  - support for round-tripping folded style scalars (initially requested 
+-    by `Johnathan Viduchinsky <https://bitbucket.org/johnathanvidu/>`__)
+-  - update of C code
+-  - speed up of scanning (~30% depending on the input)
+-
+-0.15.60 (2018-08-18):
+-  - again allow single entry map in flow sequence context (reported by 
+-    `Lee Goolsbee <https://bitbucket.org/lgoolsbee/>`__)
+-  - cleanup for mypy 
+-  - spurious print in library (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__), now automatically checked 
+-
+-0.15.59 (2018-08-17):
+-  - issue with C based loader and leading zeros (reported by 
+-    `Tom Hamilton Stubber <https://bitbucket.org/TomHamiltonStubber/>`__)
+-
+-0.15.58 (2018-08-17):
+-  - simple mappings can now be used as keys when round-tripping::
+-
+-      {a: 1, b: 2}: hello world
+-      
+-    although using the obvious operations (del, popitem) on the key will
+-    fail, you can mutilate it by going through its attributes. If you load the
+-    above YAML in `d`, then changing the value is cumbersome:
+-
+-        d = {CommentedKeyMap([('a', 1), ('b', 2)]): "goodbye"}
+-
+-    and changing the key even more so:
+-
+-        d[CommentedKeyMap([('b', 1), ('a', 2)])] = d.pop(
+-                     CommentedKeyMap([('a', 1), ('b', 2)]))
+-
+-    (you can use a `dict` instead of a list of tuples (or ordereddict), but that might result
+-    in a different order, of the keys of the key, in the output)
+-  - check integers to dump with 1.2 patterns instead of 1.1 (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__)
+-  
+-
+-0.15.57 (2018-08-15):
+-  - Fix that CommentedSeq could no longer be used in adding or do a sort
+-    (reported by `Christopher Wright <https://bitbucket.org/CJ-Wright4242/>`__)
+-
+-0.15.56 (2018-08-15):
+-  - fix issue with ``python -O`` optimizing away code (reported, and detailed cause
+-    pinpointed, by `Alex Grönholm <https://bitbucket.org/agronholm/>`__)
+-
+-0.15.55 (2018-08-14):
+-  - unmade ``CommentedSeq`` a subclass of ``list``. It is now
+-    indirectly a subclass of the standard
+-    ``collections.abc.MutableSequence`` (without .abc if you are
+-    still on Python2.7). If you do ``isinstance(yaml.load('[1, 2]'),
+-    list)``) anywhere in your code replace ``list`` with
+-    ``MutableSequence``.  Directly, ``CommentedSeq`` is a subclass of
+-    the abstract baseclass ``ruamel.yaml.compat.MutableScliceableSequence``,
+-    with the result that *(extended) slicing is supported on 
+-    ``CommentedSeq``*.
+-    (reported by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__)
+-  - duplicate keys (or their values) with non-ascii now correctly
+-    report in Python2, instead of raising a Unicode error.
+-    (Reported by `Jonathan Pyle <https://bitbucket.org/jonathan_pyle/>`__)
+-
+-0.15.54 (2018-08-13):
+-  - fix issue where a comment could pop-up twice in the output (reported by 
+-    `Mike Kazantsev <https://bitbucket.org/mk_fg/>`__ and by 
+-    `Nate Peterson <https://bitbucket.org/ndpete21/>`__)
+-  - fix issue where JSON object (mapping) without spaces was not parsed
+-    properly (reported by `Marc Schmidt <https://bitbucket.org/marcj/>`__)
+-  - fix issue where comments after empty flow-style mappings were not emitted
+-    (reported by `Qinfench Chen <https://bitbucket.org/flyin5ish/>`__)
+-
+-0.15.53 (2018-08-12):
+-  - fix issue with flow style mapping with comments gobbled newline (reported
+-    by `Christopher Lambert <https://bitbucket.org/XN137/>`__)
+-  - fix issue where single '+' under YAML 1.2 was interpreted as
+-    integer, erroring out (reported by `Jethro Yu
+-    <https://bitbucket.org/jcppkkk/>`__)
+-
+-0.15.52 (2018-08-09):
+-  - added `.copy()` mapping representation for round-tripping
+-    (``CommentedMap``) to fix incomplete copies of merged mappings
+-    (reported by `Will Richards
+-    <https://bitbucket.org/will_richards/>`__) 
+-  - Also unmade that class a subclass of ordereddict to solve incorrect behaviour
+-    for ``{**merged-mapping}`` and ``dict(**merged-mapping)`` (reported independently by
+-    `Tim Olsson <https://bitbucket.org/tgolsson/>`__ and 
+-    `Filip Matzner <https://bitbucket.org/FloopCZ/>`__)
+-
+-0.15.51 (2018-08-08):
+-  - Fix method name dumps (were not dotted) and loads (reported by `Douglas Raillard 
+-    <https://bitbucket.org/DouglasRaillard/>`__)
+-  - Fix spurious trailing white-space caused when the comment start
+-    column was no longer reached and there was no actual EOL comment
+-    (e.g. following empty line) and doing substitutions, or when
+-    quotes around scalars got dropped.  (reported by `Thomas Guillet
+-    <https://bitbucket.org/guillett/>`__)
+-
+-0.15.50 (2018-08-05):
+-  - Allow ``YAML()`` as a context manager for output, thereby making it much easier
+-    to generate multi-documents in a stream. 
+-  - Fix issue with incorrect type information for `load()` and `dump()` (reported 
+-    by `Jimbo Jim <https://bitbucket.org/jimbo1qaz/>`__)
+-
+-0.15.49 (2018-08-05):
+-  - fix preservation of leading newlines in root level literal style scalar,
+-    and preserve comment after literal style indicator (``|  # some comment``)
+-    Both needed for round-tripping multi-doc streams in 
+-    `ryd <https://pypi.org/project/ryd/>`__.
+-
+-0.15.48 (2018-08-03):
+-  - housekeeping: ``oitnb`` for formatting, mypy 0.620 upgrade and conformity
+-
+-0.15.47 (2018-07-31):
+-  - fix broken 3.6 manylinux1, the result of an unclean ``build`` (reported by 
+-    `Roman Sichnyi <https://bitbucket.org/rsichnyi-gl/>`__)
+-
+-
+-0.15.46 (2018-07-29):
+-  - fixed DeprecationWarning for importing from ``collections`` on 3.7
+-    (issue 210, reported by `Reinoud Elhorst
+-    <https://bitbucket.org/reinhrst/>`__). It was `difficult to find
+-    why tox/pytest did not report
+-    <https://stackoverflow.com/q/51573204/1307905>`__ and as time
+-    consuming to actually `fix
+-    <https://stackoverflow.com/a/51573205/1307905>`__ the tests.
+-
+-0.15.45 (2018-07-26):
+-  - After adding failing test for ``YAML.load_all(Path())``, remove StopIteration 
+-    (PR provided by `Zachary Buhman <https://bitbucket.org/buhman/>`__,
+-    also reported by `Steven Hiscocks <https://bitbucket.org/sdhiscocks/>`__.
+-
+-0.15.44 (2018-07-14):
+-  - Correct loading plain scalars consisting of numerals only and
+-    starting with `0`, when not explicitly specifying YAML version
+-    1.1. This also fixes the issue about dumping string `'019'` as
+-    plain scalars as reported by `Min RK
+-    <https://bitbucket.org/minrk/>`__, that prompted this chance.
+-
+-0.15.43 (2018-07-12):
+-  - merge PR33: Python2.7 on Windows is narrow, but has no
+-    ``sysconfig.get_config_var('Py_UNICODE_SIZE')``. (merge provided by
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__)
+-  - ``register_class()`` now returns class (proposed by
+-    `Mike Nerone <https://bitbucket.org/Manganeez/>`__}
+-
+-0.15.42 (2018-07-01):
+-  - fix regression showing only on narrow Python 2.7 (py27mu) builds
+-    (with help from
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__ and
+-    `Colm O'Connor <https://bitbucket.org/colmoconnorgithub/>`__).
+-  - run pre-commit ``tox`` on Python 2.7 wide and narrow, as well as
+-    3.4/3.5/3.6/3.7/pypy
+-
+-0.15.41 (2018-06-27):
+-  - add detection of C-compile failure (investigation prompted by
+-    `StackOverlow <https://stackoverflow.com/a/51057399/1307905>`__ by
+-    `Emmanuel Blot <https://stackoverflow.com/users/8233409/emmanuel-blot>`__),
+-    which was removed while no longer dependent on ``libyaml``, C-extensions
+-    compilation still needs a compiler though.
+-
+-0.15.40 (2018-06-18):
+-  - added links to landing places as suggested in issue 190 by
+-    `KostisA <https://bitbucket.org/ankostis/>`__
+-  - fixes issue #201: decoding unicode escaped tags on Python2, reported
+-    by `Dan Abolafia <https://bitbucket.org/danabo/>`__
+-
+-0.15.39 (2018-06-17):
+-  - merge PR27 improving package startup time (and loading when regexp not
+-    actually used), provided by
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__
+-
+-0.15.38 (2018-06-13):
+-  - fix for losing precision when roundtripping floats by
+-    `Rolf Wojtech <https://bitbucket.org/asomov/>`__
+-  - fix for hardcoded dir separator not working for Windows by
+-    `Nuno André <https://bitbucket.org/nu_no/>`__
+-  - typo fix by `Andrey Somov <https://bitbucket.org/asomov/>`__
+-
+-0.15.37 (2018-03-21):
+-  - again trying to create installable files for 187
+-
+-0.15.36 (2018-02-07):
+-  - fix issue 187, incompatibility of C extension with 3.7 (reported by
+-    Daniel Blanchard)
+-
+-0.15.35 (2017-12-03):
+-  - allow ``None`` as stream when specifying ``transform`` parameters to
+-    ``YAML.dump()``.
+-    This is useful if the transforming function doesn't return a meaningful value
+-    (inspired by `StackOverflow <https://stackoverflow.com/q/47614862/1307905>`__ by
+-    `rsaw <https://stackoverflow.com/users/406281/rsaw>`__).
+-
+-0.15.34 (2017-09-17):
+-  - fix for issue 157: CDumper not dumping floats (reported by Jan Smitka)
+-
+-0.15.33 (2017-08-31):
+-  - support for "undefined" round-tripping tagged scalar objects (in addition to
+-    tagged mapping object). Inspired by a use case presented by Matthew Patton
+-    on `StackOverflow <https://stackoverflow.com/a/45967047/1307905>`__.
+-  - fix issue 148: replace cryptic error message when using !!timestamp with an
+-    incorrectly formatted or non- scalar. Reported by FichteFoll.
+-
+-0.15.32 (2017-08-21):
+-  - allow setting ``yaml.default_flow_style = None`` (default: ``False``) for
+-    for ``typ='rt'``.
+-  - fix for issue 149: multiplications on ``ScalarFloat`` now return ``float``
+-    (reported by jan.brezina@tul.cz)
+-
+-0.15.31 (2017-08-15):
+-  - fix Comment dumping
+-
+-0.15.30 (2017-08-14):
+-  - fix for issue with "compact JSON" not parsing: ``{"in":{},"out":{}}``
+-    (reported on `StackOverflow <https://stackoverflow.com/q/45681626/1307905>`__ by
+-    `mjalkio <https://stackoverflow.com/users/5130525/mjalkio>`_
+-
+-0.15.29 (2017-08-14):
+-  - fix issue #51: different indents for mappings and sequences (reported by
+-    Alex Harvey)
+-  - fix for flow sequence/mapping as element/value of block sequence with
+-    sequence-indent minus dash-offset not equal two.
+-
+-0.15.28 (2017-08-13):
+-  - fix issue #61: merge of merge cannot be __repr__-ed (reported by Tal Liron)
+-
+-0.15.27 (2017-08-13):
+-  - fix issue 62, YAML 1.2 allows ``?`` and ``:`` in plain scalars if non-ambigious
+-    (reported by nowox)
+-  - fix lists within lists which would make comments disappear
+-
+-0.15.26 (2017-08-10):
+-  - fix for disappearing comment after empty flow sequence (reported by
+-    oit-tzhimmash)
+-
+-0.15.25 (2017-08-09):
+-  - fix for problem with dumping (unloaded) floats (reported by eyenseo)
+-
+-0.15.24 (2017-08-09):
+-  - added ScalarFloat which supports roundtripping of 23.1, 23.100,
+-    42.00E+56, 0.0, -0.0 etc. while keeping the format. Underscores in mantissas
+-    are not preserved/supported (yet, is anybody using that?).
+-  - (finally) fixed longstanding issue 23 (reported by `Antony Sottile
+-    <https://bitbucket.org/asottile/>`__), now handling comment between block
+-    mapping key and value correctly
+-  - warn on YAML 1.1 float input that is incorrect (triggered by invalid YAML
+-    provided by Cecil Curry)
+-  - allow setting of boolean representation (`false`, `true`) by using:
+-    ``yaml.boolean_representation = [u'False', u'True']``
+-
+-0.15.23 (2017-08-01):
+-  - fix for round_tripping integers on 2.7.X > sys.maxint (reported by ccatterina)
+-
+-0.15.22 (2017-07-28):
+-  - fix for round_tripping singe excl. mark tags doubling (reported and fix by Jan Brezina)
+-
+-0.15.21 (2017-07-25):
+-  - fix for writing unicode in new API, (reported on
+-    `StackOverflow <https://stackoverflow.com/a/45281922/1307905>`__
+-
+-0.15.20 (2017-07-23):
+-  - wheels for windows including C extensions
+-
+-0.15.19 (2017-07-13):
+-  - added object constructor for rt, decorator ``yaml_object`` to replace YAMLObject.
+-  - fix for problem using load_all with Path() instance
+-  - fix for load_all in combination with zero indent block style literal
+-    (``pure=True`` only!)
+-
+-0.15.18 (2017-07-04):
+-  - missing ``pure`` attribute on ``YAML`` useful for implementing `!include` tag
+-    constructor for `including YAML files in a YAML file
+-    <https://stackoverflow.com/a/44913652/1307905>`__
+-  - some documentation improvements
+-  - trigger of doc build on new revision
+-
+-0.15.17 (2017-07-03):
+-  - support for Unicode supplementary Plane **output**
+-    (input was already supported, triggered by
+-    `this <https://stackoverflow.com/a/44875714/1307905>`__ Stack Overflow Q&A)
+-
+-0.15.16 (2017-07-01):
+-  - minor typing issues (reported and fix provided by
+-    `Manvendra Singh <https://bitbucket.org/manu-chroma/>`__
+-  - small doc improvements
+-
+-0.15.15 (2017-06-27):
+-  - fix for issue 135, typ='safe' not dumping in Python 2.7
+-    (reported by Andrzej Ostrowski <https://bitbucket.org/aostr123/>`__)
+-
+-0.15.14 (2017-06-25):
+-  - fix for issue 133, in setup.py: change ModuleNotFoundError to
+-    ImportError (reported and fix by
+-    `Asley Drake  <https://github.com/aldraco>`__)
+-
+-0.15.13 (2017-06-24):
+-  - suppress duplicate key warning on mappings with merge keys (reported by
+-    Cameron Sweeney)
+-
+-0.15.12 (2017-06-24):
+-  - remove fatal dependency of setup.py on wheel package (reported by
+-    Cameron Sweeney)
+-
+-0.15.11 (2017-06-24):
+-  - fix for issue 130, regression in nested merge keys (reported by
+-    `David Fee <https://bitbucket.org/dfee/>`__)
+-
+-0.15.10 (2017-06-23):
+-  - top level PreservedScalarString not indented if not explicitly asked to
+-  - remove Makefile (not very useful anyway)
+-  - some mypy additions
+-
+-0.15.9 (2017-06-16):
+-  - fix for issue 127: tagged scalars were always quoted and seperated
+-    by a newline when in a block sequence (reported and largely fixed by
+-    `Tommy Wang <https://bitbucket.org/twang817/>`__)
+-
+-0.15.8 (2017-06-15):
+-  - allow plug-in install via ``install ruamel.yaml[jinja2]``
+-
+-0.15.7 (2017-06-14):
+-  - add plug-in mechanism for load/dump pre resp. post-processing
+-
+-0.15.6 (2017-06-10):
+-  - a set() with duplicate elements now throws error in rt loading
+-  - support for toplevel column zero literal/folded scalar in explicit documents
+-
+-0.15.5 (2017-06-08):
+-  - repeat `load()` on a single `YAML()` instance would fail.
+-
+-0.15.4 (2017-06-08):
+-  - `transform` parameter on dump that expects a function taking a
+-    string and returning a string. This allows transformation of the output
+-    before it is written to stream. This forces creation of the complete output in memory!
+-  - some updates to the docs
+-
+-0.15.3 (2017-06-07):
+-  - No longer try to compile C extensions on Windows. Compilation can be forced by setting
+-    the environment variable `RUAMEL_FORCE_EXT_BUILD` to some value
+-    before starting the `pip install`.
+-
+-0.15.2 (2017-06-07):
+-  - update to conform to mypy 0.511: mypy --strict
+-
+-0.15.1 (2017-06-07):
+-  - `duplicate keys  <http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys>`__
+-    in mappings generate an error (in the old API this change generates a warning until 0.16)
+-  - dependecy on ruamel.ordereddict for 2.7 now via extras_require
+-
+-0.15.0 (2017-06-04):
+-  - it is now allowed to pass in a ``pathlib.Path`` as "stream" parameter to all
+-    load/dump functions
+-  - passing in a non-supported object (e.g. a string) as "stream" will result in a
+-    much more meaningful YAMLStreamError.
+-  - assigning a normal string value to an existing CommentedMap key or CommentedSeq
+-    element will result in a value cast to the previous value's type if possible.
+-  - added ``YAML`` class for new API
+-
+-0.14.12 (2017-05-14):
+-  - fix for issue 119, deepcopy not returning subclasses (reported and PR by
+-    Constantine Evans <cevans@evanslabs.org>)
+-
+-0.14.11 (2017-05-01):
+-  - fix for issue 103 allowing implicit documents after document end marker line (``...``)
+-    in YAML 1.2
+-
+-0.14.10 (2017-04-26):
+-  - fix problem with emitting using cyaml
+-
+-0.14.9 (2017-04-22):
+-  - remove dependency on ``typing`` while still supporting ``mypy``
+-    (http://stackoverflow.com/a/43516781/1307905)
+-  - fix unclarity in doc that stated 2.6 is supported (reported by feetdust)
+-
+-0.14.8 (2017-04-19):
+-  - fix Text not available on 3.5.0 and 3.5.1, now proactively setting version guards
+-    on all files (reported by `João Paulo Magalhães <https://bitbucket.org/jpmag/>`__)
+-
+-0.14.7 (2017-04-18):
+-  - round trip of integers (decimal, octal, hex, binary) now preserve
+-    leading zero(s) padding and underscores. Underscores are presumed
+-    to be at regular distances (i.e. ``0o12_345_67`` dumps back as
+-    ``0o1_23_45_67`` as the space from the last digit to the
+-    underscore before that is the determining factor).
+-
+-0.14.6 (2017-04-14):
+-  - binary, octal and hex integers are now preserved by default. This
+-    was a known deficiency. Working on this was prompted by the issue report (112)
+-    from devnoname120, as well as the additional experience with `.replace()`
+-    on `scalarstring` classes.
+-  - fix issues 114: cannot install on Buildozer (reported by mixmastamyk).
+-    Setting env. var ``RUAMEL_NO_PIP_INSTALL_CHECK`` will suppress ``pip``-check.
+-
+-0.14.5 (2017-04-04):
+-  - fix issue 109: None not dumping correctly at top level (reported by Andrea Censi)
+-  - fix issue 110: .replace on Preserved/DoubleQuoted/SingleQuoted ScalarString
+-    would give back "normal" string (reported by sandres23)
+-
+-0.14.4 (2017-03-31):
+-  - fix readme
+-
+-0.14.3 (2017-03-31):
+-  - fix for 0o52 not being a string in YAML 1.1 (reported on
+-    `StackOverflow Q&A 43138503 <http://stackoverflow.com/a/43138503/1307905>`__ by
+-    `Frank D <http://stackoverflow.com/users/7796630/frank-d>`__)
+-
+-0.14.2 (2017-03-23):
+-  - fix for old default pip on Ubuntu 14.04 (reported by Sébastien Maccagnoni-Munch)
+-
+-0.14.1 (2017-03-22):
+-  - fix Text not available on 3.5.0 and 3.5.1 (reported by Charles Bouchard-Légaré)
+-
+-0.14.0 (2017-03-21):
+-  - updates for mypy --strict
+-  - preparation for moving away from inheritance in Loader and Dumper, calls from e.g.
+-    the Representer to the Serializer.serialize() are now done via the attribute
+-    .serializer.serialize(). Usage of .serialize() outside of Serializer will be
+-    deprecated soon
+-  - some extra tests on main.py functions
+-
+-----
+-
+-For older changes see the file
+-`CHANGES <https://bitbucket.org/ruamel/yaml/src/default/CHANGES>`_
+diff --git a/dynaconf/vendor/ruamel/yaml/__init__.py b/dynaconf/vendor/ruamel/yaml/__init__.py
+deleted file mode 100644
+index ac49423..0000000
+--- a/dynaconf/vendor/ruamel/yaml/__init__.py
++++ /dev/null
+@@ -1,10 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_B='yaml'
+-_A=False
+-if _A:from typing import Dict,Any
+-_package_data=dict(full_package_name='ruamel.yaml',version_info=(0,16,10),__version__='0.16.10',author='Anthon van der Neut',author_email='a.van.der.neut@ruamel.eu',description='ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order',entry_points=None,since=2014,extras_require={':platform_python_implementation=="CPython" and python_version<="2.7"':['ruamel.ordereddict'],':platform_python_implementation=="CPython" and python_version<"3.9"':['ruamel.yaml.clib>=0.1.2'],'jinja2':['ruamel.yaml.jinja2>=0.2'],'docs':['ryd']},classifiers=['Programming Language :: Python :: 2.7','Programming Language :: Python :: 3.5','Programming Language :: Python :: 3.6','Programming Language :: Python :: 3.7','Programming Language :: Python :: 3.8','Programming Language :: Python :: Implementation :: CPython','Programming Language :: Python :: Implementation :: PyPy','Programming Language :: Python :: Implementation :: Jython','Topic :: Software Development :: Libraries :: Python Modules','Topic :: Text Processing :: Markup','Typing :: Typed'],keywords='yaml 1.2 parser round-trip preserve quotes order config',read_the_docs=_B,supported=[(2,7),(3,5)],tox=dict(env='*',deps='ruamel.std.pathlib',fl8excl='_test/lib'),universal=True,rtfd=_B)
+-version_info=_package_data['version_info']
+-__version__=_package_data['__version__']
+-try:from .cyaml import *;__with_libyaml__=True
+-except (ImportError,ValueError):__with_libyaml__=_A
+-from dynaconf.vendor.ruamel.yaml.main import *
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/anchor.py b/dynaconf/vendor/ruamel/yaml/anchor.py
+deleted file mode 100644
+index 8327508..0000000
+--- a/dynaconf/vendor/ruamel/yaml/anchor.py
++++ /dev/null
+@@ -1,7 +0,0 @@
+-_A=False
+-if _A:from typing import Any,Dict,Optional,List,Union,Optional,Iterator
+-anchor_attrib='_yaml_anchor'
+-class Anchor:
+-	__slots__='value','always_dump';attrib=anchor_attrib
+-	def __init__(A):A.value=None;A.always_dump=_A
+-	def __repr__(A):B=', (always dump)'if A.always_dump else'';return 'Anchor({!r}{})'.format(A.value,B)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/comments.py b/dynaconf/vendor/ruamel/yaml/comments.py
+deleted file mode 100644
+index da872f8..0000000
+--- a/dynaconf/vendor/ruamel/yaml/comments.py
++++ /dev/null
+@@ -1,485 +0,0 @@
+-from __future__ import absolute_import,print_function
+-_G='_od'
+-_F='CommentedMap'
+-_E='# '
+-_D=True
+-_C='\n'
+-_B=False
+-_A=None
+-import sys,copy
+-from .compat import ordereddict
+-from .compat import PY2,string_types,MutableSliceableSequence
+-from .scalarstring import ScalarString
+-from .anchor import Anchor
+-if PY2:from collections import MutableSet,Sized,Set,Mapping
+-else:from collections.abc import MutableSet,Sized,Set,Mapping
+-if _B:from typing import Any,Dict,Optional,List,Union,Optional,Iterator
+-__all__=['CommentedSeq','CommentedKeySeq',_F,'CommentedOrderedMap','CommentedSet','comment_attrib','merge_attrib']
+-comment_attrib='_yaml_comment'
+-format_attrib='_yaml_format'
+-line_col_attrib='_yaml_line_col'
+-merge_attrib='_yaml_merge'
+-tag_attrib='_yaml_tag'
+-class Comment:
+-	__slots__='comment','_items','_end','_start';attrib=comment_attrib
+-	def __init__(A):A.comment=_A;A._items={};A._end=[]
+-	def __str__(A):
+-		if bool(A._end):B=',\n  end='+str(A._end)
+-		else:B=''
+-		return 'Comment(comment={0},\n  items={1}{2})'.format(A.comment,A._items,B)
+-	@property
+-	def items(self):return self._items
+-	@property
+-	def end(self):return self._end
+-	@end.setter
+-	def end(self,value):self._end=value
+-	@property
+-	def start(self):return self._start
+-	@start.setter
+-	def start(self,value):self._start=value
+-def NoComment():0
+-class Format:
+-	__slots__='_flow_style',;attrib=format_attrib
+-	def __init__(A):A._flow_style=_A
+-	def set_flow_style(A):A._flow_style=_D
+-	def set_block_style(A):A._flow_style=_B
+-	def flow_style(A,default=_A):
+-		if A._flow_style is _A:return default
+-		return A._flow_style
+-class LineCol:
+-	attrib=line_col_attrib
+-	def __init__(A):A.line=_A;A.col=_A;A.data=_A
+-	def add_kv_line_col(A,key,data):
+-		if A.data is _A:A.data={}
+-		A.data[key]=data
+-	def key(A,k):return A._kv(k,0,1)
+-	def value(A,k):return A._kv(k,2,3)
+-	def _kv(A,k,x0,x1):
+-		if A.data is _A:return _A
+-		B=A.data[k];return B[x0],B[x1]
+-	def item(A,idx):
+-		if A.data is _A:return _A
+-		return A.data[idx][0],A.data[idx][1]
+-	def add_idx_line_col(A,key,data):
+-		if A.data is _A:A.data={}
+-		A.data[key]=data
+-class Tag:
+-	__slots__='value',;attrib=tag_attrib
+-	def __init__(A):A.value=_A
+-	def __repr__(A):return '{0.__class__.__name__}({0.value!r})'.format(A)
+-class CommentedBase:
+-	@property
+-	def ca(self):
+-		A=self
+-		if not hasattr(A,Comment.attrib):setattr(A,Comment.attrib,Comment())
+-		return getattr(A,Comment.attrib)
+-	def yaml_end_comment_extend(A,comment,clear=_B):
+-		B=comment
+-		if B is _A:return
+-		if clear or A.ca.end is _A:A.ca.end=[]
+-		A.ca.end.extend(B)
+-	def yaml_key_comment_extend(C,key,comment,clear=_B):
+-		A=comment;B=C.ca._items.setdefault(key,[_A,_A,_A,_A])
+-		if clear or B[1]is _A:
+-			if A[1]is not _A:assert isinstance(A[1],list)
+-			B[1]=A[1]
+-		else:B[1].extend(A[0])
+-		B[0]=A[0]
+-	def yaml_value_comment_extend(C,key,comment,clear=_B):
+-		A=comment;B=C.ca._items.setdefault(key,[_A,_A,_A,_A])
+-		if clear or B[3]is _A:
+-			if A[1]is not _A:assert isinstance(A[1],list)
+-			B[3]=A[1]
+-		else:B[3].extend(A[0])
+-		B[2]=A[0]
+-	def yaml_set_start_comment(B,comment,indent=0):
+-		A=comment;from .error import CommentMark as C;from .tokens import CommentToken as D;E=B._yaml_get_pre_comment()
+-		if A[-1]==_C:A=A[:-1]
+-		F=C(indent)
+-		for G in A.split(_C):E.append(D(_E+G+_C,F,_A))
+-	def yaml_set_comment_before_after_key(J,key,before=_A,indent=0,after=_A,after_indent=_A):
+-		H=indent;E=after_indent;B=after;A=before;from dynaconf.vendor.ruamel.yaml.error import CommentMark as I;from dynaconf.vendor.ruamel.yaml.tokens import CommentToken as K
+-		def F(s,mark):return K((_E if s else'')+s+_C,mark,_A)
+-		if E is _A:E=H+2
+-		if A and len(A)>1 and A[-1]==_C:A=A[:-1]
+-		if B and B[-1]==_C:B=B[:-1]
+-		D=I(H);C=J.ca.items.setdefault(key,[_A,[],_A,_A])
+-		if A==_C:C[1].append(F('',D))
+-		elif A:
+-			for G in A.split(_C):C[1].append(F(G,D))
+-		if B:
+-			D=I(E)
+-			if C[3]is _A:C[3]=[]
+-			for G in B.split(_C):C[3].append(F(G,D))
+-	@property
+-	def fa(self):
+-		A=self
+-		if not hasattr(A,Format.attrib):setattr(A,Format.attrib,Format())
+-		return getattr(A,Format.attrib)
+-	def yaml_add_eol_comment(C,comment,key=NoComment,column=_A):
+-		H='#';B=column;A=comment;from .tokens import CommentToken as D;from .error import CommentMark as E
+-		if B is _A:
+-			try:B=C._yaml_get_column(key)
+-			except AttributeError:B=0
+-		if A[0]!=H:A=_E+A
+-		if B is _A:
+-			if A[0]==H:A=' '+A;B=0
+-		F=E(B);G=[D(A,F,_A),_A];C._yaml_add_eol_comment(G,key=key)
+-	@property
+-	def lc(self):
+-		A=self
+-		if not hasattr(A,LineCol.attrib):setattr(A,LineCol.attrib,LineCol())
+-		return getattr(A,LineCol.attrib)
+-	def _yaml_set_line_col(A,line,col):A.lc.line=line;A.lc.col=col
+-	def _yaml_set_kv_line_col(A,key,data):A.lc.add_kv_line_col(key,data)
+-	def _yaml_set_idx_line_col(A,key,data):A.lc.add_idx_line_col(key,data)
+-	@property
+-	def anchor(self):
+-		A=self
+-		if not hasattr(A,Anchor.attrib):setattr(A,Anchor.attrib,Anchor())
+-		return getattr(A,Anchor.attrib)
+-	def yaml_anchor(A):
+-		if not hasattr(A,Anchor.attrib):return _A
+-		return A.anchor
+-	def yaml_set_anchor(A,value,always_dump=_B):A.anchor.value=value;A.anchor.always_dump=always_dump
+-	@property
+-	def tag(self):
+-		A=self
+-		if not hasattr(A,Tag.attrib):setattr(A,Tag.attrib,Tag())
+-		return getattr(A,Tag.attrib)
+-	def yaml_set_tag(A,value):A.tag.value=value
+-	def copy_attributes(B,t,memo=_A):
+-		for A in [Comment.attrib,Format.attrib,LineCol.attrib,Anchor.attrib,Tag.attrib,merge_attrib]:
+-			if hasattr(B,A):
+-				if memo is not _A:setattr(t,A,copy.deepcopy(getattr(B,A,memo)))
+-				else:setattr(t,A,getattr(B,A))
+-	def _yaml_add_eol_comment(A,comment,key):raise NotImplementedError
+-	def _yaml_get_pre_comment(A):raise NotImplementedError
+-	def _yaml_get_column(A,key):raise NotImplementedError
+-class CommentedSeq(MutableSliceableSequence,list,CommentedBase):
+-	__slots__=Comment.attrib,'_lst'
+-	def __init__(A,*B,**C):list.__init__(A,*B,**C)
+-	def __getsingleitem__(A,idx):return list.__getitem__(A,idx)
+-	def __setsingleitem__(B,idx,value):
+-		C=idx;A=value
+-		if C<len(B):
+-			if isinstance(A,string_types)and not isinstance(A,ScalarString)and isinstance(B[C],ScalarString):A=type(B[C])(A)
+-		list.__setitem__(B,C,A)
+-	def __delsingleitem__(A,idx=_A):
+-		B=idx;list.__delitem__(A,B);A.ca.items.pop(B,_A)
+-		for C in sorted(A.ca.items):
+-			if C<B:continue
+-			A.ca.items[C-1]=A.ca.items.pop(C)
+-	def __len__(A):return list.__len__(A)
+-	def insert(A,idx,val):
+-		list.insert(A,idx,val)
+-		for B in sorted(A.ca.items,reverse=_D):
+-			if B<idx:break
+-			A.ca.items[B+1]=A.ca.items.pop(B)
+-	def extend(A,val):list.extend(A,val)
+-	def __eq__(A,other):return list.__eq__(A,other)
+-	def _yaml_add_comment(A,comment,key=NoComment):
+-		B=comment
+-		if key is not NoComment:A.yaml_key_comment_extend(key,B)
+-		else:A.ca.comment=B
+-	def _yaml_add_eol_comment(A,comment,key):A._yaml_add_comment(comment,key=key)
+-	def _yaml_get_columnX(A,key):return A.ca.items[key][0].start_mark.column
+-	def _yaml_get_column(A,key):
+-		C=key;E=_A;B=_A;F,G=C-1,C+1
+-		if F in A.ca.items:B=F
+-		elif G in A.ca.items:B=G
+-		else:
+-			for (D,H) in enumerate(A):
+-				if D>=C:break
+-				if D not in A.ca.items:continue
+-				B=D
+-		if B is not _A:E=A._yaml_get_columnX(B)
+-		return E
+-	def _yaml_get_pre_comment(A):
+-		B=[]
+-		if A.ca.comment is _A:A.ca.comment=[_A,B]
+-		else:A.ca.comment[1]=B
+-		return B
+-	def __deepcopy__(A,memo):
+-		C=memo;B=A.__class__();C[id(A)]=B
+-		for D in A:B.append(copy.deepcopy(D,C));A.copy_attributes(B,memo=C)
+-		return B
+-	def __add__(A,other):return list.__add__(A,other)
+-	def sort(A,key=_A,reverse=_B):
+-		C=reverse
+-		if key is _A:B=sorted(zip(A,range(len(A))),reverse=C);list.__init__(A,[A[0]for A in B])
+-		else:B=sorted(zip(map(key,list.__iter__(A)),range(len(A))),reverse=C);list.__init__(A,[list.__getitem__(A,C[1])for C in B])
+-		D=A.ca.items;A.ca._items={}
+-		for (F,G) in enumerate(B):
+-			E=G[1]
+-			if E in D:A.ca.items[F]=D[E]
+-	def __repr__(A):return list.__repr__(A)
+-class CommentedKeySeq(tuple,CommentedBase):
+-	def _yaml_add_comment(A,comment,key=NoComment):
+-		B=comment
+-		if key is not NoComment:A.yaml_key_comment_extend(key,B)
+-		else:A.ca.comment=B
+-	def _yaml_add_eol_comment(A,comment,key):A._yaml_add_comment(comment,key=key)
+-	def _yaml_get_columnX(A,key):return A.ca.items[key][0].start_mark.column
+-	def _yaml_get_column(A,key):
+-		C=key;E=_A;B=_A;F,G=C-1,C+1
+-		if F in A.ca.items:B=F
+-		elif G in A.ca.items:B=G
+-		else:
+-			for (D,H) in enumerate(A):
+-				if D>=C:break
+-				if D not in A.ca.items:continue
+-				B=D
+-		if B is not _A:E=A._yaml_get_columnX(B)
+-		return E
+-	def _yaml_get_pre_comment(A):
+-		B=[]
+-		if A.ca.comment is _A:A.ca.comment=[_A,B]
+-		else:A.ca.comment[1]=B
+-		return B
+-class CommentedMapView(Sized):
+-	__slots__='_mapping',
+-	def __init__(A,mapping):A._mapping=mapping
+-	def __len__(A):B=len(A._mapping);return B
+-class CommentedMapKeysView(CommentedMapView,Set):
+-	__slots__=()
+-	@classmethod
+-	def _from_iterable(A,it):return set(it)
+-	def __contains__(A,key):return key in A._mapping
+-	def __iter__(A):
+-		for B in A._mapping:yield B
+-class CommentedMapItemsView(CommentedMapView,Set):
+-	__slots__=()
+-	@classmethod
+-	def _from_iterable(A,it):return set(it)
+-	def __contains__(A,item):
+-		B,C=item
+-		try:D=A._mapping[B]
+-		except KeyError:return _B
+-		else:return D==C
+-	def __iter__(A):
+-		for B in A._mapping._keys():yield(B,A._mapping[B])
+-class CommentedMapValuesView(CommentedMapView):
+-	__slots__=()
+-	def __contains__(A,value):
+-		for B in A._mapping:
+-			if value==A._mapping[B]:return _D
+-		return _B
+-	def __iter__(A):
+-		for B in A._mapping._keys():yield A._mapping[B]
+-class CommentedMap(ordereddict,CommentedBase):
+-	__slots__=Comment.attrib,'_ok','_ref'
+-	def __init__(A,*B,**C):A._ok=set();A._ref=[];ordereddict.__init__(A,*B,**C)
+-	def _yaml_add_comment(A,comment,key=NoComment,value=NoComment):
+-		C=value;B=comment
+-		if key is not NoComment:A.yaml_key_comment_extend(key,B);return
+-		if C is not NoComment:A.yaml_value_comment_extend(C,B)
+-		else:A.ca.comment=B
+-	def _yaml_add_eol_comment(A,comment,key):A._yaml_add_comment(comment,value=key)
+-	def _yaml_get_columnX(A,key):return A.ca.items[key][2].start_mark.column
+-	def _yaml_get_column(A,key):
+-		E=key;H=_A;B=_A;C,F,I=_A,_A,_A
+-		for D in A:
+-			if C is not _A and D!=E:F=D;break
+-			if D==E:C=I
+-			I=D
+-		if C in A.ca.items:B=C
+-		elif F in A.ca.items:B=F
+-		else:
+-			for G in A:
+-				if G>=E:break
+-				if G not in A.ca.items:continue
+-				B=G
+-		if B is not _A:H=A._yaml_get_columnX(B)
+-		return H
+-	def _yaml_get_pre_comment(A):
+-		B=[]
+-		if A.ca.comment is _A:A.ca.comment=[_A,B]
+-		else:A.ca.comment[1]=B
+-		return B
+-	def update(B,vals):
+-		A=vals
+-		try:ordereddict.update(B,A)
+-		except TypeError:
+-			for C in A:B[C]=A[C]
+-		try:B._ok.update(A.keys())
+-		except AttributeError:
+-			for C in A:B._ok.add(C[0])
+-	def insert(A,pos,key,value,comment=_A):
+-		C=comment;B=key;ordereddict.insert(A,pos,B,value);A._ok.add(B)
+-		if C is not _A:A.yaml_add_eol_comment(C,key=B)
+-	def mlget(C,key,default=_A,list_ok=_B):
+-		D=list_ok;B=default;A=key
+-		if not isinstance(A,list):return C.get(A,B)
+-		def E(key_list,level,d):
+-			B=level;A=key_list
+-			if not D:assert isinstance(d,dict)
+-			if B>=len(A):
+-				if B>len(A):raise IndexError
+-				return d[A[B-1]]
+-			return E(A,B+1,d[A[B-1]])
+-		try:return E(A,1,C)
+-		except KeyError:return B
+-		except (TypeError,IndexError):
+-			if not D:raise
+-			return B
+-	def __getitem__(B,key):
+-		A=key
+-		try:return ordereddict.__getitem__(B,A)
+-		except KeyError:
+-			for C in getattr(B,merge_attrib,[]):
+-				if A in C[1]:return C[1][A]
+-			raise
+-	def __setitem__(A,key,value):
+-		C=value;B=key
+-		if B in A:
+-			if isinstance(C,string_types)and not isinstance(C,ScalarString)and isinstance(A[B],ScalarString):C=type(A[B])(C)
+-		ordereddict.__setitem__(A,B,C);A._ok.add(B)
+-	def _unmerged_contains(A,key):
+-		if key in A._ok:return _D
+-		return _A
+-	def __contains__(A,key):return bool(ordereddict.__contains__(A,key))
+-	def get(A,key,default=_A):
+-		try:return A.__getitem__(key)
+-		except:return default
+-	def __repr__(A):return ordereddict.__repr__(A).replace(_F,'ordereddict')
+-	def non_merged_items(A):
+-		for B in ordereddict.__iter__(A):
+-			if B in A._ok:yield(B,ordereddict.__getitem__(A,B))
+-	def __delitem__(A,key):
+-		B=key;A._ok.discard(B);ordereddict.__delitem__(A,B)
+-		for C in A._ref:C.update_key_value(B)
+-	def __iter__(A):
+-		for B in ordereddict.__iter__(A):yield B
+-	def _keys(A):
+-		for B in ordereddict.__iter__(A):yield B
+-	def __len__(A):return int(ordereddict.__len__(A))
+-	def __eq__(A,other):return bool(dict(A)==other)
+-	if PY2:
+-		def keys(A):return list(A._keys())
+-		def iterkeys(A):return A._keys()
+-		def viewkeys(A):return CommentedMapKeysView(A)
+-	else:
+-		def keys(A):return CommentedMapKeysView(A)
+-	if PY2:
+-		def _values(A):
+-			for B in ordereddict.__iter__(A):yield ordereddict.__getitem__(A,B)
+-		def values(A):return list(A._values())
+-		def itervalues(A):return A._values()
+-		def viewvalues(A):return CommentedMapValuesView(A)
+-	else:
+-		def values(A):return CommentedMapValuesView(A)
+-	def _items(A):
+-		for B in ordereddict.__iter__(A):yield(B,ordereddict.__getitem__(A,B))
+-	if PY2:
+-		def items(A):return list(A._items())
+-		def iteritems(A):return A._items()
+-		def viewitems(A):return CommentedMapItemsView(A)
+-	else:
+-		def items(A):return CommentedMapItemsView(A)
+-	@property
+-	def merge(self):
+-		A=self
+-		if not hasattr(A,merge_attrib):setattr(A,merge_attrib,[])
+-		return getattr(A,merge_attrib)
+-	def copy(A):
+-		B=type(A)()
+-		for (C,D) in A._items():B[C]=D
+-		A.copy_attributes(B);return B
+-	def add_referent(A,cm):
+-		if cm not in A._ref:A._ref.append(cm)
+-	def add_yaml_merge(A,value):
+-		C=value
+-		for B in C:
+-			B[1].add_referent(A)
+-			for (D,B) in B[1].items():
+-				if ordereddict.__contains__(A,D):continue
+-				ordereddict.__setitem__(A,D,B)
+-		A.merge.extend(C)
+-	def update_key_value(B,key):
+-		A=key
+-		if A in B._ok:return
+-		for C in B.merge:
+-			if A in C[1]:ordereddict.__setitem__(B,A,C[1][A]);return
+-		ordereddict.__delitem__(B,A)
+-	def __deepcopy__(A,memo):
+-		C=memo;B=A.__class__();C[id(A)]=B
+-		for D in A:B[D]=copy.deepcopy(A[D],C)
+-		A.copy_attributes(B,memo=C);return B
+-@classmethod
+-def raise_immutable(cls,*A,**B):raise TypeError('{} objects are immutable'.format(cls.__name__))
+-class CommentedKeyMap(CommentedBase,Mapping):
+-	__slots__=Comment.attrib,_G
+-	def __init__(A,*B,**C):
+-		if hasattr(A,_G):raise_immutable(A)
+-		try:A._od=ordereddict(*B,**C)
+-		except TypeError:
+-			if PY2:A._od=ordereddict(B[0].items())
+-			else:raise
+-	__delitem__=__setitem__=clear=pop=popitem=setdefault=update=raise_immutable
+-	def __getitem__(A,index):return A._od[index]
+-	def __iter__(A):
+-		for B in A._od.__iter__():yield B
+-	def __len__(A):return len(A._od)
+-	def __hash__(A):return hash(tuple(A.items()))
+-	def __repr__(A):
+-		if not hasattr(A,merge_attrib):return A._od.__repr__()
+-		return'ordereddict('+repr(list(A._od.items()))+')'
+-	@classmethod
+-	def fromkeys(A,v=_A):return CommentedKeyMap(dict.fromkeys(A,v))
+-	def _yaml_add_comment(A,comment,key=NoComment):
+-		B=comment
+-		if key is not NoComment:A.yaml_key_comment_extend(key,B)
+-		else:A.ca.comment=B
+-	def _yaml_add_eol_comment(A,comment,key):A._yaml_add_comment(comment,key=key)
+-	def _yaml_get_columnX(A,key):return A.ca.items[key][0].start_mark.column
+-	def _yaml_get_column(A,key):
+-		C=key;E=_A;B=_A;F,G=C-1,C+1
+-		if F in A.ca.items:B=F
+-		elif G in A.ca.items:B=G
+-		else:
+-			for (D,H) in enumerate(A):
+-				if D>=C:break
+-				if D not in A.ca.items:continue
+-				B=D
+-		if B is not _A:E=A._yaml_get_columnX(B)
+-		return E
+-	def _yaml_get_pre_comment(A):
+-		B=[]
+-		if A.ca.comment is _A:A.ca.comment=[_A,B]
+-		else:A.ca.comment[1]=B
+-		return B
+-class CommentedOrderedMap(CommentedMap):__slots__=Comment.attrib,
+-class CommentedSet(MutableSet,CommentedBase):
+-	__slots__=Comment.attrib,'odict'
+-	def __init__(A,values=_A):
+-		B=values;A.odict=ordereddict();MutableSet.__init__(A)
+-		if B is not _A:A|=B
+-	def _yaml_add_comment(A,comment,key=NoComment,value=NoComment):
+-		C=value;B=comment
+-		if key is not NoComment:A.yaml_key_comment_extend(key,B);return
+-		if C is not NoComment:A.yaml_value_comment_extend(C,B)
+-		else:A.ca.comment=B
+-	def _yaml_add_eol_comment(A,comment,key):A._yaml_add_comment(comment,value=key)
+-	def add(A,value):A.odict[value]=_A
+-	def discard(A,value):del A.odict[value]
+-	def __contains__(A,x):return x in A.odict
+-	def __iter__(A):
+-		for B in A.odict:yield B
+-	def __len__(A):return len(A.odict)
+-	def __repr__(A):return 'set({0!r})'.format(A.odict.keys())
+-class TaggedScalar(CommentedBase):
+-	def __init__(A,value=_A,style=_A,tag=_A):
+-		A.value=value;A.style=style
+-		if tag is not _A:A.yaml_set_tag(tag)
+-	def __str__(A):return A.value
+-def dump_comments(d,name='',sep='.',out=sys.stdout):
+-	G='ca';E='{}\n';D=out;C=sep;A=name
+-	if isinstance(d,dict)and hasattr(d,G):
+-		if A:sys.stdout.write(E.format(A))
+-		D.write(E.format(d.ca))
+-		for B in d:dump_comments(d[B],name=A+C+B if A else B,sep=C,out=D)
+-	elif isinstance(d,list)and hasattr(d,G):
+-		if A:sys.stdout.write(E.format(A))
+-		D.write(E.format(d.ca))
+-		for (F,B) in enumerate(d):dump_comments(B,name=A+C+str(F)if A else str(F),sep=C,out=D)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/compat.py b/dynaconf/vendor/ruamel/yaml/compat.py
+deleted file mode 100644
+index 0512ad7..0000000
+--- a/dynaconf/vendor/ruamel/yaml/compat.py
++++ /dev/null
+@@ -1,120 +0,0 @@
+-from __future__ import print_function
+-_D='RUAMELDEBUG'
+-_C=True
+-_B=False
+-_A=None
+-import sys,os,types,traceback
+-from abc import abstractmethod
+-if _B:from typing import Any,Dict,Optional,List,Union,BinaryIO,IO,Text,Tuple,Optional
+-_DEFAULT_YAML_VERSION=1,2
+-try:from ruamel.ordereddict import ordereddict
+-except:
+-	try:from collections import OrderedDict
+-	except ImportError:from ordereddict import OrderedDict
+-	class ordereddict(OrderedDict):
+-		if not hasattr(OrderedDict,'insert'):
+-			def insert(A,pos,key,value):
+-				C=value
+-				if pos>=len(A):A[key]=C;return
+-				B=ordereddict();B.update(A)
+-				for E in B:del A[E]
+-				for (F,D) in enumerate(B):
+-					if pos==F:A[key]=C
+-					A[D]=B[D]
+-PY2=sys.version_info[0]==2
+-PY3=sys.version_info[0]==3
+-if PY3:
+-	def utf8(s):return s
+-	def to_str(s):return s
+-	def to_unicode(s):return s
+-else:
+-	if _B:unicode=str
+-	def utf8(s):return s.encode('utf-8')
+-	def to_str(s):return str(s)
+-	def to_unicode(s):return unicode(s)
+-if PY3:string_types=str;integer_types=int;class_types=type;text_type=str;binary_type=bytes;MAXSIZE=sys.maxsize;unichr=chr;import io;StringIO=io.StringIO;BytesIO=io.BytesIO;no_limit_int=int;from collections.abc import Hashable,MutableSequence,MutableMapping,Mapping
+-else:string_types=basestring;integer_types=int,long;class_types=type,types.ClassType;text_type=unicode;binary_type=str;unichr=unichr;from StringIO import StringIO as _StringIO;StringIO=_StringIO;import cStringIO;BytesIO=cStringIO.StringIO;no_limit_int=long;from collections import Hashable,MutableSequence,MutableMapping,Mapping
+-if _B:StreamType=Any;StreamTextType=StreamType;VersionType=Union[List[int],str,Tuple[int,int]]
+-if PY3:builtins_module='builtins'
+-else:builtins_module='__builtin__'
+-UNICODE_SIZE=4 if sys.maxunicode>65535 else 2
+-def with_metaclass(meta,*A):return meta('NewBase',A,{})
+-DBG_TOKEN=1
+-DBG_EVENT=2
+-DBG_NODE=4
+-_debug=_A
+-if _D in os.environ:
+-	_debugx=os.environ.get(_D)
+-	if _debugx is _A:_debug=0
+-	else:_debug=int(_debugx)
+-if bool(_debug):
+-	class ObjectCounter:
+-		def __init__(A):A.map={}
+-		def __call__(A,k):A.map[k]=A.map.get(k,0)+1
+-		def dump(A):
+-			for B in sorted(A.map):sys.stdout.write('{} -> {}'.format(B,A.map[B]))
+-	object_counter=ObjectCounter()
+-def dbg(val=_A):
+-	global _debug
+-	if _debug is _A:
+-		A=os.environ.get('YAMLDEBUG')
+-		if A is _A:_debug=0
+-		else:_debug=int(A)
+-	if val is _A:return _debug
+-	return _debug&val
+-class Nprint:
+-	def __init__(A,file_name=_A):A._max_print=_A;A._count=_A;A._file_name=file_name
+-	def __call__(A,*E,**F):
+-		if not bool(_debug):return
+-		B=sys.stdout if A._file_name is _A else open(A._file_name,'a');C=print;D=F.copy();D['file']=B;C(*E,**D);B.flush()
+-		if A._max_print is not _A:
+-			if A._count is _A:A._count=A._max_print
+-			A._count-=1
+-			if A._count==0:C('forced exit\n');traceback.print_stack();B.flush();sys.exit(0)
+-		if A._file_name:B.close()
+-	def set_max_print(A,i):A._max_print=i;A._count=_A
+-nprint=Nprint()
+-nprintf=Nprint('/var/tmp/ruamel.yaml.log')
+-def check_namespace_char(ch):
+-	A=ch
+-	if'!'<=A<='~':return _C
+-	if'\xa0'<=A<='\ud7ff':return _C
+-	if'\ue000'<=A<='�'and A!='\ufeff':return _C
+-	if'𐀀'<=A<='\U0010ffff':return _C
+-	return _B
+-def check_anchorname_char(ch):
+-	if ch in',[]{}':return _B
+-	return check_namespace_char(ch)
+-def version_tnf(t1,t2=_A):
+-	from dynaconf.vendor.ruamel.yaml import version_info as A
+-	if A<t1:return _C
+-	if t2 is not _A and A<t2:return _A
+-	return _B
+-class MutableSliceableSequence(MutableSequence):
+-	__slots__=()
+-	def __getitem__(A,index):
+-		B=index
+-		if not isinstance(B,slice):return A.__getsingleitem__(B)
+-		return type(A)([A[C]for C in range(*B.indices(len(A)))])
+-	def __setitem__(C,index,value):
+-		B=value;A=index
+-		if not isinstance(A,slice):return C.__setsingleitem__(A,B)
+-		assert iter(B)
+-		if A.step is _A:
+-			del C[A.start:A.stop]
+-			for F in reversed(B):C.insert(0 if A.start is _A else A.start,F)
+-		else:
+-			D=A.indices(len(C));E=(D[1]-D[0]-1)//D[2]+1
+-			if E<len(B):raise TypeError('too many elements in value {} < {}'.format(E,len(B)))
+-			elif E>len(B):raise TypeError('not enough elements in value {} > {}'.format(E,len(B)))
+-			for (G,H) in enumerate(range(*D)):C[H]=B[G]
+-	def __delitem__(A,index):
+-		B=index
+-		if not isinstance(B,slice):return A.__delsingleitem__(B)
+-		for C in reversed(range(*B.indices(len(A)))):del A[C]
+-	@abstractmethod
+-	def __getsingleitem__(self,index):raise IndexError
+-	@abstractmethod
+-	def __setsingleitem__(self,index,value):raise IndexError
+-	@abstractmethod
+-	def __delsingleitem__(self,index):raise IndexError
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/composer.py b/dynaconf/vendor/ruamel/yaml/composer.py
+deleted file mode 100644
+index 9a0f8f0..0000000
+--- a/dynaconf/vendor/ruamel/yaml/composer.py
++++ /dev/null
+@@ -1,82 +0,0 @@
+-from __future__ import absolute_import,print_function
+-_B='typ'
+-_A=None
+-import warnings
+-from .error import MarkedYAMLError,ReusedAnchorWarning
+-from .compat import utf8,nprint,nprintf
+-from .events import StreamStartEvent,StreamEndEvent,MappingStartEvent,MappingEndEvent,SequenceStartEvent,SequenceEndEvent,AliasEvent,ScalarEvent
+-from .nodes import MappingNode,ScalarNode,SequenceNode
+-if False:from typing import Any,Dict,Optional,List
+-__all__=['Composer','ComposerError']
+-class ComposerError(MarkedYAMLError):0
+-class Composer:
+-	def __init__(A,loader=_A):
+-		A.loader=loader
+-		if A.loader is not _A and getattr(A.loader,'_composer',_A)is _A:A.loader._composer=A
+-		A.anchors={}
+-	@property
+-	def parser(self):
+-		A=self
+-		if hasattr(A.loader,_B):A.loader.parser
+-		return A.loader._parser
+-	@property
+-	def resolver(self):
+-		A=self
+-		if hasattr(A.loader,_B):A.loader.resolver
+-		return A.loader._resolver
+-	def check_node(A):
+-		if A.parser.check_event(StreamStartEvent):A.parser.get_event()
+-		return not A.parser.check_event(StreamEndEvent)
+-	def get_node(A):
+-		if not A.parser.check_event(StreamEndEvent):return A.compose_document()
+-	def get_single_node(A):
+-		A.parser.get_event();B=_A
+-		if not A.parser.check_event(StreamEndEvent):B=A.compose_document()
+-		if not A.parser.check_event(StreamEndEvent):C=A.parser.get_event();raise ComposerError('expected a single document in the stream',B.start_mark,'but found another document',C.start_mark)
+-		A.parser.get_event();return B
+-	def compose_document(A):A.parser.get_event();B=A.compose_node(_A,_A);A.parser.get_event();A.anchors={};return B
+-	def compose_node(A,parent,index):
+-		if A.parser.check_event(AliasEvent):
+-			C=A.parser.get_event();D=C.anchor
+-			if D not in A.anchors:raise ComposerError(_A,_A,'found undefined alias %r'%utf8(D),C.start_mark)
+-			return A.anchors[D]
+-		C=A.parser.peek_event();B=C.anchor
+-		if B is not _A:
+-			if B in A.anchors:F='\nfound duplicate anchor {!r}\nfirst occurrence {}\nsecond occurrence {}'.format(B,A.anchors[B].start_mark,C.start_mark);warnings.warn(F,ReusedAnchorWarning)
+-		A.resolver.descend_resolver(parent,index)
+-		if A.parser.check_event(ScalarEvent):E=A.compose_scalar_node(B)
+-		elif A.parser.check_event(SequenceStartEvent):E=A.compose_sequence_node(B)
+-		elif A.parser.check_event(MappingStartEvent):E=A.compose_mapping_node(B)
+-		A.resolver.ascend_resolver();return E
+-	def compose_scalar_node(C,anchor):
+-		D=anchor;A=C.parser.get_event();B=A.tag
+-		if B is _A or B=='!':B=C.resolver.resolve(ScalarNode,A.value,A.implicit)
+-		E=ScalarNode(B,A.value,A.start_mark,A.end_mark,style=A.style,comment=A.comment,anchor=D)
+-		if D is not _A:C.anchors[D]=E
+-		return E
+-	def compose_sequence_node(B,anchor):
+-		F=anchor;C=B.parser.get_event();D=C.tag
+-		if D is _A or D=='!':D=B.resolver.resolve(SequenceNode,_A,C.implicit)
+-		A=SequenceNode(D,[],C.start_mark,_A,flow_style=C.flow_style,comment=C.comment,anchor=F)
+-		if F is not _A:B.anchors[F]=A
+-		G=0
+-		while not B.parser.check_event(SequenceEndEvent):A.value.append(B.compose_node(A,G));G+=1
+-		E=B.parser.get_event()
+-		if A.flow_style is True and E.comment is not _A:
+-			if A.comment is not _A:nprint('Warning: unexpected end_event commment in sequence node {}'.format(A.flow_style))
+-			A.comment=E.comment
+-		A.end_mark=E.end_mark;B.check_end_doc_comment(E,A);return A
+-	def compose_mapping_node(B,anchor):
+-		F=anchor;C=B.parser.get_event();D=C.tag
+-		if D is _A or D=='!':D=B.resolver.resolve(MappingNode,_A,C.implicit)
+-		A=MappingNode(D,[],C.start_mark,_A,flow_style=C.flow_style,comment=C.comment,anchor=F)
+-		if F is not _A:B.anchors[F]=A
+-		while not B.parser.check_event(MappingEndEvent):G=B.compose_node(A,_A);H=B.compose_node(A,G);A.value.append((G,H))
+-		E=B.parser.get_event()
+-		if A.flow_style is True and E.comment is not _A:A.comment=E.comment
+-		A.end_mark=E.end_mark;B.check_end_doc_comment(E,A);return A
+-	def check_end_doc_comment(C,end_event,node):
+-		B=node;A=end_event
+-		if A.comment and A.comment[1]:
+-			if B.comment is _A:B.comment=[_A,_A]
+-			assert not isinstance(B,ScalarEvent);B.comment.append(A.comment[1]);A.comment[1]=_A
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/configobjwalker.py b/dynaconf/vendor/ruamel/yaml/configobjwalker.py
+deleted file mode 100644
+index ba9cafe..0000000
+--- a/dynaconf/vendor/ruamel/yaml/configobjwalker.py
++++ /dev/null
+@@ -1,4 +0,0 @@
+-import warnings
+-from .util import configobj_walker as new_configobj_walker
+-if False:from typing import Any
+-def configobj_walker(cfg):warnings.warn('configobj_walker has moved to ruamel.yaml.util, please update your code');return new_configobj_walker(cfg)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/constructor.py b/dynaconf/vendor/ruamel/yaml/constructor.py
+deleted file mode 100644
+index 2400e6b..0000000
+--- a/dynaconf/vendor/ruamel/yaml/constructor.py
++++ /dev/null
+@@ -1,728 +0,0 @@
+-from __future__ import print_function,absolute_import,division
+-_AD='expected the empty value, but found %r'
+-_AC='cannot find module %r (%s)'
+-_AB='expected non-empty name appended to the tag'
+-_AA='tag:yaml.org,2002:map'
+-_A9='tag:yaml.org,2002:seq'
+-_A8='tag:yaml.org,2002:set'
+-_A7='tag:yaml.org,2002:pairs'
+-_A6='tag:yaml.org,2002:omap'
+-_A5='tag:yaml.org,2002:timestamp'
+-_A4='tag:yaml.org,2002:binary'
+-_A3='tag:yaml.org,2002:float'
+-_A2='tag:yaml.org,2002:int'
+-_A1='tag:yaml.org,2002:bool'
+-_A0='tag:yaml.org,2002:null'
+-_z='could not determine a constructor for the tag %r'
+-_y='second'
+-_x='minute'
+-_w='day'
+-_v='month'
+-_u='year'
+-_t='failed to construct timestamp from "{}"'
+-_s='decodebytes'
+-_r='failed to convert base64 data into ascii: %s'
+-_q='.nan'
+-_p='.inf'
+-_o='expected a mapping or list of mappings for merging, but found %s'
+-_n='expected a mapping for merging, but found %s'
+-_m='                        Duplicate keys will become an error in future releases, and are errors\n                        by default when using the new API.\n                        '
+-_l='\n                        To suppress this check see:\n                           http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys\n                        '
+-_k='tag:yaml.org,2002:merge'
+-_j='                    Duplicate keys will become an error in future releases, and are errors\n                    by default when using the new API.\n                    '
+-_i='\n                    To suppress this check see:\n                        http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys\n                    '
+-_h='expected a sequence node, but found %s'
+-_g='expected a scalar node, but found %s'
+-_f='typ'
+-_e='while constructing a Python module'
+-_d='expected a single mapping item, but found %d items'
+-_c='expected a mapping of length 1, but found %s'
+-_b='expected a sequence, but found %s'
+-_a='failed to decode base64 data: %s'
+-_Z='tag:yaml.org,2002:value'
+-_Y='found duplicate key "{}"'
+-_X='found unhashable key'
+-_W='found unacceptable key (%s)'
+-_V='__setstate__'
+-_U='tz_hour'
+-_T='hour'
+-_S='ascii'
+-_R='tag:yaml.org,2002:str'
+-_Q='utf-8'
+-_P='expected a mapping node, but found %s'
+-_O='tz_minute'
+-_N='e'
+-_M='+-'
+-_L='while constructing an ordered map'
+-_K='tz_sign'
+-_J='-'
+-_I='fraction'
+-_H='.'
+-_G=':'
+-_F='0'
+-_E='while constructing a mapping'
+-_D='_'
+-_C=True
+-_B=False
+-_A=None
+-import datetime,base64,binascii,re,sys,types,warnings
+-from .error import MarkedYAMLError,MarkedYAMLFutureWarning,MantissaNoDotYAML1_1Warning
+-from .nodes import *
+-from .nodes import SequenceNode,MappingNode,ScalarNode
+-from .compat import utf8,builtins_module,to_str,PY2,PY3,text_type,nprint,nprintf,version_tnf
+-from .compat import ordereddict,Hashable,MutableSequence
+-from .compat import MutableMapping
+-from .comments import *
+-from .comments import CommentedMap,CommentedOrderedMap,CommentedSet,CommentedKeySeq,CommentedSeq,TaggedScalar,CommentedKeyMap
+-from .scalarstring import SingleQuotedScalarString,DoubleQuotedScalarString,LiteralScalarString,FoldedScalarString,PlainScalarString,ScalarString
+-from .scalarint import ScalarInt,BinaryInt,OctalInt,HexInt,HexCapsInt
+-from .scalarfloat import ScalarFloat
+-from .scalarbool import ScalarBoolean
+-from .timestamp import TimeStamp
+-from .util import RegExp
+-if _B:from typing import Any,Dict,List,Set,Generator,Union,Optional
+-__all__=['BaseConstructor','SafeConstructor','Constructor','ConstructorError','RoundTripConstructor']
+-class ConstructorError(MarkedYAMLError):0
+-class DuplicateKeyFutureWarning(MarkedYAMLFutureWarning):0
+-class DuplicateKeyError(MarkedYAMLFutureWarning):0
+-class BaseConstructor:
+-	yaml_constructors={};yaml_multi_constructors={}
+-	def __init__(self,preserve_quotes=_A,loader=_A):
+-		self.loader=loader
+-		if self.loader is not _A and getattr(self.loader,'_constructor',_A)is _A:self.loader._constructor=self
+-		self.loader=loader;self.yaml_base_dict_type=dict;self.yaml_base_list_type=list;self.constructed_objects={};self.recursive_objects={};self.state_generators=[];self.deep_construct=_B;self._preserve_quotes=preserve_quotes;self.allow_duplicate_keys=version_tnf((0,15,1),(0,16))
+-	@property
+-	def composer(self):
+-		if hasattr(self.loader,_f):return self.loader.composer
+-		try:return self.loader._composer
+-		except AttributeError:sys.stdout.write('slt {}\n'.format(type(self)));sys.stdout.write('slc {}\n'.format(self.loader._composer));sys.stdout.write('{}\n'.format(dir(self)));raise
+-	@property
+-	def resolver(self):
+-		if hasattr(self.loader,_f):return self.loader.resolver
+-		return self.loader._resolver
+-	def check_data(self):return self.composer.check_node()
+-	def get_data(self):
+-		if self.composer.check_node():return self.construct_document(self.composer.get_node())
+-	def get_single_data(self):
+-		node=self.composer.get_single_node()
+-		if node is not _A:return self.construct_document(node)
+-		return _A
+-	def construct_document(self,node):
+-		data=self.construct_object(node)
+-		while bool(self.state_generators):
+-			state_generators=self.state_generators;self.state_generators=[]
+-			for generator in state_generators:
+-				for _dummy in generator:0
+-		self.constructed_objects={};self.recursive_objects={};self.deep_construct=_B;return data
+-	def construct_object(self,node,deep=_B):
+-		if node in self.constructed_objects:return self.constructed_objects[node]
+-		if deep:old_deep=self.deep_construct;self.deep_construct=_C
+-		if node in self.recursive_objects:return self.recursive_objects[node]
+-		self.recursive_objects[node]=_A;data=self.construct_non_recursive_object(node);self.constructed_objects[node]=data;del self.recursive_objects[node]
+-		if deep:self.deep_construct=old_deep
+-		return data
+-	def construct_non_recursive_object(self,node,tag=_A):
+-		constructor=_A;tag_suffix=_A
+-		if tag is _A:tag=node.tag
+-		if tag in self.yaml_constructors:constructor=self.yaml_constructors[tag]
+-		else:
+-			for tag_prefix in self.yaml_multi_constructors:
+-				if tag.startswith(tag_prefix):tag_suffix=tag[len(tag_prefix):];constructor=self.yaml_multi_constructors[tag_prefix];break
+-			else:
+-				if _A in self.yaml_multi_constructors:tag_suffix=tag;constructor=self.yaml_multi_constructors[_A]
+-				elif _A in self.yaml_constructors:constructor=self.yaml_constructors[_A]
+-				elif isinstance(node,ScalarNode):constructor=self.__class__.construct_scalar
+-				elif isinstance(node,SequenceNode):constructor=self.__class__.construct_sequence
+-				elif isinstance(node,MappingNode):constructor=self.__class__.construct_mapping
+-		if tag_suffix is _A:data=constructor(self,node)
+-		else:data=constructor(self,tag_suffix,node)
+-		if isinstance(data,types.GeneratorType):
+-			generator=data;data=next(generator)
+-			if self.deep_construct:
+-				for _dummy in generator:0
+-			else:self.state_generators.append(generator)
+-		return data
+-	def construct_scalar(self,node):
+-		if not isinstance(node,ScalarNode):raise ConstructorError(_A,_A,_g%node.id,node.start_mark)
+-		return node.value
+-	def construct_sequence(self,node,deep=_B):
+-		if not isinstance(node,SequenceNode):raise ConstructorError(_A,_A,_h%node.id,node.start_mark)
+-		return[self.construct_object(child,deep=deep)for child in node.value]
+-	def construct_mapping(self,node,deep=_B):
+-		if not isinstance(node,MappingNode):raise ConstructorError(_A,_A,_P%node.id,node.start_mark)
+-		total_mapping=self.yaml_base_dict_type()
+-		if getattr(node,'merge',_A)is not _A:todo=[(node.merge,_B),(node.value,_B)]
+-		else:todo=[(node.value,_C)]
+-		for (values,check) in todo:
+-			mapping=self.yaml_base_dict_type()
+-			for (key_node,value_node) in values:
+-				key=self.construct_object(key_node,deep=_C)
+-				if not isinstance(key,Hashable):
+-					if isinstance(key,list):key=tuple(key)
+-				if PY2:
+-					try:hash(key)
+-					except TypeError as exc:raise ConstructorError(_E,node.start_mark,_W%exc,key_node.start_mark)
+-				elif not isinstance(key,Hashable):raise ConstructorError(_E,node.start_mark,_X,key_node.start_mark)
+-				value=self.construct_object(value_node,deep=deep)
+-				if check:
+-					if self.check_mapping_key(node,key_node,mapping,key,value):mapping[key]=value
+-				else:mapping[key]=value
+-			total_mapping.update(mapping)
+-		return total_mapping
+-	def check_mapping_key(self,node,key_node,mapping,key,value):
+-		if key in mapping:
+-			if not self.allow_duplicate_keys:
+-				mk=mapping.get(key)
+-				if PY2:
+-					if isinstance(key,unicode):key=key.encode(_Q)
+-					if isinstance(value,unicode):value=value.encode(_Q)
+-					if isinstance(mk,unicode):mk=mk.encode(_Q)
+-				args=[_E,node.start_mark,'found duplicate key "{}" with value "{}" (original value: "{}")'.format(key,value,mk),key_node.start_mark,_i,_j]
+-				if self.allow_duplicate_keys is _A:warnings.warn(DuplicateKeyFutureWarning(*args))
+-				else:raise DuplicateKeyError(*args)
+-			return _B
+-		return _C
+-	def check_set_key(self,node,key_node,setting,key):
+-		if key in setting:
+-			if not self.allow_duplicate_keys:
+-				if PY2:
+-					if isinstance(key,unicode):key=key.encode(_Q)
+-				args=['while constructing a set',node.start_mark,_Y.format(key),key_node.start_mark,_i,_j]
+-				if self.allow_duplicate_keys is _A:warnings.warn(DuplicateKeyFutureWarning(*args))
+-				else:raise DuplicateKeyError(*args)
+-	def construct_pairs(self,node,deep=_B):
+-		if not isinstance(node,MappingNode):raise ConstructorError(_A,_A,_P%node.id,node.start_mark)
+-		pairs=[]
+-		for (key_node,value_node) in node.value:key=self.construct_object(key_node,deep=deep);value=self.construct_object(value_node,deep=deep);pairs.append((key,value))
+-		return pairs
+-	@classmethod
+-	def add_constructor(cls,tag,constructor):
+-		if'yaml_constructors'not in cls.__dict__:cls.yaml_constructors=cls.yaml_constructors.copy()
+-		cls.yaml_constructors[tag]=constructor
+-	@classmethod
+-	def add_multi_constructor(cls,tag_prefix,multi_constructor):
+-		if'yaml_multi_constructors'not in cls.__dict__:cls.yaml_multi_constructors=cls.yaml_multi_constructors.copy()
+-		cls.yaml_multi_constructors[tag_prefix]=multi_constructor
+-class SafeConstructor(BaseConstructor):
+-	def construct_scalar(self,node):
+-		if isinstance(node,MappingNode):
+-			for (key_node,value_node) in node.value:
+-				if key_node.tag==_Z:return self.construct_scalar(value_node)
+-		return BaseConstructor.construct_scalar(self,node)
+-	def flatten_mapping(self,node):
+-		merge=[];index=0
+-		while index<len(node.value):
+-			key_node,value_node=node.value[index]
+-			if key_node.tag==_k:
+-				if merge:
+-					if self.allow_duplicate_keys:del node.value[index];index+=1;continue
+-					args=[_E,node.start_mark,_Y.format(key_node.value),key_node.start_mark,_l,_m]
+-					if self.allow_duplicate_keys is _A:warnings.warn(DuplicateKeyFutureWarning(*args))
+-					else:raise DuplicateKeyError(*args)
+-				del node.value[index]
+-				if isinstance(value_node,MappingNode):self.flatten_mapping(value_node);merge.extend(value_node.value)
+-				elif isinstance(value_node,SequenceNode):
+-					submerge=[]
+-					for subnode in value_node.value:
+-						if not isinstance(subnode,MappingNode):raise ConstructorError(_E,node.start_mark,_n%subnode.id,subnode.start_mark)
+-						self.flatten_mapping(subnode);submerge.append(subnode.value)
+-					submerge.reverse()
+-					for value in submerge:merge.extend(value)
+-				else:raise ConstructorError(_E,node.start_mark,_o%value_node.id,value_node.start_mark)
+-			elif key_node.tag==_Z:key_node.tag=_R;index+=1
+-			else:index+=1
+-		if bool(merge):node.merge=merge;node.value=merge+node.value
+-	def construct_mapping(self,node,deep=_B):
+-		if isinstance(node,MappingNode):self.flatten_mapping(node)
+-		return BaseConstructor.construct_mapping(self,node,deep=deep)
+-	def construct_yaml_null(self,node):self.construct_scalar(node);return _A
+-	bool_values={'yes':_C,'no':_B,'y':_C,'n':_B,'true':_C,'false':_B,'on':_C,'off':_B}
+-	def construct_yaml_bool(self,node):value=self.construct_scalar(node);return self.bool_values[value.lower()]
+-	def construct_yaml_int(self,node):
+-		value_s=to_str(self.construct_scalar(node));value_s=value_s.replace(_D,'');sign=+1
+-		if value_s[0]==_J:sign=-1
+-		if value_s[0]in _M:value_s=value_s[1:]
+-		if value_s==_F:return 0
+-		elif value_s.startswith('0b'):return sign*int(value_s[2:],2)
+-		elif value_s.startswith('0x'):return sign*int(value_s[2:],16)
+-		elif value_s.startswith('0o'):return sign*int(value_s[2:],8)
+-		elif self.resolver.processing_version==(1,1)and value_s[0]==_F:return sign*int(value_s,8)
+-		elif self.resolver.processing_version==(1,1)and _G in value_s:
+-			digits=[int(part)for part in value_s.split(_G)];digits.reverse();base=1;value=0
+-			for digit in digits:value+=digit*base;base*=60
+-			return sign*value
+-		else:return sign*int(value_s)
+-	inf_value=1e+300
+-	while inf_value!=inf_value*inf_value:inf_value*=inf_value
+-	nan_value=-inf_value/inf_value
+-	def construct_yaml_float(self,node):
+-		value_so=to_str(self.construct_scalar(node));value_s=value_so.replace(_D,'').lower();sign=+1
+-		if value_s[0]==_J:sign=-1
+-		if value_s[0]in _M:value_s=value_s[1:]
+-		if value_s==_p:return sign*self.inf_value
+-		elif value_s==_q:return self.nan_value
+-		elif self.resolver.processing_version!=(1,2)and _G in value_s:
+-			digits=[float(part)for part in value_s.split(_G)];digits.reverse();base=1;value=0.0
+-			for digit in digits:value+=digit*base;base*=60
+-			return sign*value
+-		else:
+-			if self.resolver.processing_version!=(1,2)and _N in value_s:
+-				mantissa,exponent=value_s.split(_N)
+-				if _H not in mantissa:warnings.warn(MantissaNoDotYAML1_1Warning(node,value_so))
+-			return sign*float(value_s)
+-	if PY3:
+-		def construct_yaml_binary(self,node):
+-			try:value=self.construct_scalar(node).encode(_S)
+-			except UnicodeEncodeError as exc:raise ConstructorError(_A,_A,_r%exc,node.start_mark)
+-			try:
+-				if hasattr(base64,_s):return base64.decodebytes(value)
+-				else:return base64.decodestring(value)
+-			except binascii.Error as exc:raise ConstructorError(_A,_A,_a%exc,node.start_mark)
+-	else:
+-		def construct_yaml_binary(self,node):
+-			value=self.construct_scalar(node)
+-			try:return to_str(value).decode('base64')
+-			except (binascii.Error,UnicodeEncodeError)as exc:raise ConstructorError(_A,_A,_a%exc,node.start_mark)
+-	timestamp_regexp=RegExp('^(?P<year>[0-9][0-9][0-9][0-9])\n          -(?P<month>[0-9][0-9]?)\n          -(?P<day>[0-9][0-9]?)\n          (?:((?P<t>[Tt])|[ \\t]+)   # explictly not retaining extra spaces\n          (?P<hour>[0-9][0-9]?)\n          :(?P<minute>[0-9][0-9])\n          :(?P<second>[0-9][0-9])\n          (?:\\.(?P<fraction>[0-9]*))?\n          (?:[ \\t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)\n          (?::(?P<tz_minute>[0-9][0-9]))?))?)?$',re.X)
+-	def construct_yaml_timestamp(self,node,values=_A):
+-		if values is _A:
+-			try:match=self.timestamp_regexp.match(node.value)
+-			except TypeError:match=_A
+-			if match is _A:raise ConstructorError(_A,_A,_t.format(node.value),node.start_mark)
+-			values=match.groupdict()
+-		year=int(values[_u]);month=int(values[_v]);day=int(values[_w])
+-		if not values[_T]:return datetime.date(year,month,day)
+-		hour=int(values[_T]);minute=int(values[_x]);second=int(values[_y]);fraction=0
+-		if values[_I]:
+-			fraction_s=values[_I][:6]
+-			while len(fraction_s)<6:fraction_s+=_F
+-			fraction=int(fraction_s)
+-			if len(values[_I])>6 and int(values[_I][6])>4:fraction+=1
+-		delta=_A
+-		if values[_K]:
+-			tz_hour=int(values[_U]);minutes=values[_O];tz_minute=int(minutes)if minutes else 0;delta=datetime.timedelta(hours=tz_hour,minutes=tz_minute)
+-			if values[_K]==_J:delta=-delta
+-		data=datetime.datetime(year,month,day,hour,minute,second,fraction)
+-		if delta:data-=delta
+-		return data
+-	def construct_yaml_omap(self,node):
+-		omap=ordereddict();yield omap
+-		if not isinstance(node,SequenceNode):raise ConstructorError(_L,node.start_mark,_b%node.id,node.start_mark)
+-		for subnode in node.value:
+-			if not isinstance(subnode,MappingNode):raise ConstructorError(_L,node.start_mark,_c%subnode.id,subnode.start_mark)
+-			if len(subnode.value)!=1:raise ConstructorError(_L,node.start_mark,_d%len(subnode.value),subnode.start_mark)
+-			key_node,value_node=subnode.value[0];key=self.construct_object(key_node);assert key not in omap;value=self.construct_object(value_node);omap[key]=value
+-	def construct_yaml_pairs(self,node):
+-		A='while constructing pairs';pairs=[];yield pairs
+-		if not isinstance(node,SequenceNode):raise ConstructorError(A,node.start_mark,_b%node.id,node.start_mark)
+-		for subnode in node.value:
+-			if not isinstance(subnode,MappingNode):raise ConstructorError(A,node.start_mark,_c%subnode.id,subnode.start_mark)
+-			if len(subnode.value)!=1:raise ConstructorError(A,node.start_mark,_d%len(subnode.value),subnode.start_mark)
+-			key_node,value_node=subnode.value[0];key=self.construct_object(key_node);value=self.construct_object(value_node);pairs.append((key,value))
+-	def construct_yaml_set(self,node):data=set();yield data;value=self.construct_mapping(node);data.update(value)
+-	def construct_yaml_str(self,node):
+-		value=self.construct_scalar(node)
+-		if PY3:return value
+-		try:return value.encode(_S)
+-		except UnicodeEncodeError:return value
+-	def construct_yaml_seq(self,node):data=self.yaml_base_list_type();yield data;data.extend(self.construct_sequence(node))
+-	def construct_yaml_map(self,node):data=self.yaml_base_dict_type();yield data;value=self.construct_mapping(node);data.update(value)
+-	def construct_yaml_object(self,node,cls):
+-		data=cls.__new__(cls);yield data
+-		if hasattr(data,_V):state=self.construct_mapping(node,deep=_C);data.__setstate__(state)
+-		else:state=self.construct_mapping(node);data.__dict__.update(state)
+-	def construct_undefined(self,node):raise ConstructorError(_A,_A,_z%utf8(node.tag),node.start_mark)
+-SafeConstructor.add_constructor(_A0,SafeConstructor.construct_yaml_null)
+-SafeConstructor.add_constructor(_A1,SafeConstructor.construct_yaml_bool)
+-SafeConstructor.add_constructor(_A2,SafeConstructor.construct_yaml_int)
+-SafeConstructor.add_constructor(_A3,SafeConstructor.construct_yaml_float)
+-SafeConstructor.add_constructor(_A4,SafeConstructor.construct_yaml_binary)
+-SafeConstructor.add_constructor(_A5,SafeConstructor.construct_yaml_timestamp)
+-SafeConstructor.add_constructor(_A6,SafeConstructor.construct_yaml_omap)
+-SafeConstructor.add_constructor(_A7,SafeConstructor.construct_yaml_pairs)
+-SafeConstructor.add_constructor(_A8,SafeConstructor.construct_yaml_set)
+-SafeConstructor.add_constructor(_R,SafeConstructor.construct_yaml_str)
+-SafeConstructor.add_constructor(_A9,SafeConstructor.construct_yaml_seq)
+-SafeConstructor.add_constructor(_AA,SafeConstructor.construct_yaml_map)
+-SafeConstructor.add_constructor(_A,SafeConstructor.construct_undefined)
+-if PY2:
+-	class classobj:0
+-class Constructor(SafeConstructor):
+-	def construct_python_str(self,node):return utf8(self.construct_scalar(node))
+-	def construct_python_unicode(self,node):return self.construct_scalar(node)
+-	if PY3:
+-		def construct_python_bytes(self,node):
+-			try:value=self.construct_scalar(node).encode(_S)
+-			except UnicodeEncodeError as exc:raise ConstructorError(_A,_A,_r%exc,node.start_mark)
+-			try:
+-				if hasattr(base64,_s):return base64.decodebytes(value)
+-				else:return base64.decodestring(value)
+-			except binascii.Error as exc:raise ConstructorError(_A,_A,_a%exc,node.start_mark)
+-	def construct_python_long(self,node):
+-		val=self.construct_yaml_int(node)
+-		if PY3:return val
+-		return int(val)
+-	def construct_python_complex(self,node):return complex(self.construct_scalar(node))
+-	def construct_python_tuple(self,node):return tuple(self.construct_sequence(node))
+-	def find_python_module(self,name,mark):
+-		if not name:raise ConstructorError(_e,mark,_AB,mark)
+-		try:__import__(name)
+-		except ImportError as exc:raise ConstructorError(_e,mark,_AC%(utf8(name),exc),mark)
+-		return sys.modules[name]
+-	def find_python_name(self,name,mark):
+-		A='while constructing a Python object'
+-		if not name:raise ConstructorError(A,mark,_AB,mark)
+-		if _H in name:
+-			lname=name.split(_H);lmodule_name=lname;lobject_name=[]
+-			while len(lmodule_name)>1:
+-				lobject_name.insert(0,lmodule_name.pop());module_name=_H.join(lmodule_name)
+-				try:__import__(module_name);break
+-				except ImportError:continue
+-		else:module_name=builtins_module;lobject_name=[name]
+-		try:__import__(module_name)
+-		except ImportError as exc:raise ConstructorError(A,mark,_AC%(utf8(module_name),exc),mark)
+-		module=sys.modules[module_name];object_name=_H.join(lobject_name);obj=module
+-		while lobject_name:
+-			if not hasattr(obj,lobject_name[0]):raise ConstructorError(A,mark,'cannot find %r in the module %r'%(utf8(object_name),module.__name__),mark)
+-			obj=getattr(obj,lobject_name.pop(0))
+-		return obj
+-	def construct_python_name(self,suffix,node):
+-		value=self.construct_scalar(node)
+-		if value:raise ConstructorError('while constructing a Python name',node.start_mark,_AD%utf8(value),node.start_mark)
+-		return self.find_python_name(suffix,node.start_mark)
+-	def construct_python_module(self,suffix,node):
+-		value=self.construct_scalar(node)
+-		if value:raise ConstructorError(_e,node.start_mark,_AD%utf8(value),node.start_mark)
+-		return self.find_python_module(suffix,node.start_mark)
+-	def make_python_instance(self,suffix,node,args=_A,kwds=_A,newobj=_B):
+-		if not args:args=[]
+-		if not kwds:kwds={}
+-		cls=self.find_python_name(suffix,node.start_mark)
+-		if PY3:
+-			if newobj and isinstance(cls,type):return cls.__new__(cls,*args,**kwds)
+-			else:return cls(*args,**kwds)
+-		elif newobj and isinstance(cls,type(classobj))and not args and not kwds:instance=classobj();instance.__class__=cls;return instance
+-		elif newobj and isinstance(cls,type):return cls.__new__(cls,*args,**kwds)
+-		else:return cls(*args,**kwds)
+-	def set_python_instance_state(self,instance,state):
+-		if hasattr(instance,_V):instance.__setstate__(state)
+-		else:
+-			slotstate={}
+-			if isinstance(state,tuple)and len(state)==2:state,slotstate=state
+-			if hasattr(instance,'__dict__'):instance.__dict__.update(state)
+-			elif state:slotstate.update(state)
+-			for (key,value) in slotstate.items():setattr(instance,key,value)
+-	def construct_python_object(self,suffix,node):instance=self.make_python_instance(suffix,node,newobj=_C);self.recursive_objects[node]=instance;yield instance;deep=hasattr(instance,_V);state=self.construct_mapping(node,deep=deep);self.set_python_instance_state(instance,state)
+-	def construct_python_object_apply(self,suffix,node,newobj=_B):
+-		if isinstance(node,SequenceNode):args=self.construct_sequence(node,deep=_C);kwds={};state={};listitems=[];dictitems={}
+-		else:value=self.construct_mapping(node,deep=_C);args=value.get('args',[]);kwds=value.get('kwds',{});state=value.get('state',{});listitems=value.get('listitems',[]);dictitems=value.get('dictitems',{})
+-		instance=self.make_python_instance(suffix,node,args,kwds,newobj)
+-		if bool(state):self.set_python_instance_state(instance,state)
+-		if bool(listitems):instance.extend(listitems)
+-		if bool(dictitems):
+-			for key in dictitems:instance[key]=dictitems[key]
+-		return instance
+-	def construct_python_object_new(self,suffix,node):return self.construct_python_object_apply(suffix,node,newobj=_C)
+-Constructor.add_constructor('tag:yaml.org,2002:python/none',Constructor.construct_yaml_null)
+-Constructor.add_constructor('tag:yaml.org,2002:python/bool',Constructor.construct_yaml_bool)
+-Constructor.add_constructor('tag:yaml.org,2002:python/str',Constructor.construct_python_str)
+-Constructor.add_constructor('tag:yaml.org,2002:python/unicode',Constructor.construct_python_unicode)
+-if PY3:Constructor.add_constructor('tag:yaml.org,2002:python/bytes',Constructor.construct_python_bytes)
+-Constructor.add_constructor('tag:yaml.org,2002:python/int',Constructor.construct_yaml_int)
+-Constructor.add_constructor('tag:yaml.org,2002:python/long',Constructor.construct_python_long)
+-Constructor.add_constructor('tag:yaml.org,2002:python/float',Constructor.construct_yaml_float)
+-Constructor.add_constructor('tag:yaml.org,2002:python/complex',Constructor.construct_python_complex)
+-Constructor.add_constructor('tag:yaml.org,2002:python/list',Constructor.construct_yaml_seq)
+-Constructor.add_constructor('tag:yaml.org,2002:python/tuple',Constructor.construct_python_tuple)
+-Constructor.add_constructor('tag:yaml.org,2002:python/dict',Constructor.construct_yaml_map)
+-Constructor.add_multi_constructor('tag:yaml.org,2002:python/name:',Constructor.construct_python_name)
+-Constructor.add_multi_constructor('tag:yaml.org,2002:python/module:',Constructor.construct_python_module)
+-Constructor.add_multi_constructor('tag:yaml.org,2002:python/object:',Constructor.construct_python_object)
+-Constructor.add_multi_constructor('tag:yaml.org,2002:python/object/apply:',Constructor.construct_python_object_apply)
+-Constructor.add_multi_constructor('tag:yaml.org,2002:python/object/new:',Constructor.construct_python_object_new)
+-class RoundTripConstructor(SafeConstructor):
+-	def construct_scalar(self,node):
+-		A='\x07'
+-		if not isinstance(node,ScalarNode):raise ConstructorError(_A,_A,_g%node.id,node.start_mark)
+-		if node.style=='|'and isinstance(node.value,text_type):
+-			lss=LiteralScalarString(node.value,anchor=node.anchor)
+-			if node.comment and node.comment[1]:lss.comment=node.comment[1][0]
+-			return lss
+-		if node.style=='>'and isinstance(node.value,text_type):
+-			fold_positions=[];idx=-1
+-			while _C:
+-				idx=node.value.find(A,idx+1)
+-				if idx<0:break
+-				fold_positions.append(idx-len(fold_positions))
+-			fss=FoldedScalarString(node.value.replace(A,''),anchor=node.anchor)
+-			if node.comment and node.comment[1]:fss.comment=node.comment[1][0]
+-			if fold_positions:fss.fold_pos=fold_positions
+-			return fss
+-		elif bool(self._preserve_quotes)and isinstance(node.value,text_type):
+-			if node.style=="'":return SingleQuotedScalarString(node.value,anchor=node.anchor)
+-			if node.style=='"':return DoubleQuotedScalarString(node.value,anchor=node.anchor)
+-		if node.anchor:return PlainScalarString(node.value,anchor=node.anchor)
+-		return node.value
+-	def construct_yaml_int(self,node):
+-		width=_A;value_su=to_str(self.construct_scalar(node))
+-		try:sx=value_su.rstrip(_D);underscore=[len(sx)-sx.rindex(_D)-1,_B,_B]
+-		except ValueError:underscore=_A
+-		except IndexError:underscore=_A
+-		value_s=value_su.replace(_D,'');sign=+1
+-		if value_s[0]==_J:sign=-1
+-		if value_s[0]in _M:value_s=value_s[1:]
+-		if value_s==_F:return 0
+-		elif value_s.startswith('0b'):
+-			if self.resolver.processing_version>(1,1)and value_s[2]==_F:width=len(value_s[2:])
+-			if underscore is not _A:underscore[1]=value_su[2]==_D;underscore[2]=len(value_su[2:])>1 and value_su[-1]==_D
+-			return BinaryInt(sign*int(value_s[2:],2),width=width,underscore=underscore,anchor=node.anchor)
+-		elif value_s.startswith('0x'):
+-			if self.resolver.processing_version>(1,1)and value_s[2]==_F:width=len(value_s[2:])
+-			hex_fun=HexInt
+-			for ch in value_s[2:]:
+-				if ch in'ABCDEF':hex_fun=HexCapsInt;break
+-				if ch in'abcdef':break
+-			if underscore is not _A:underscore[1]=value_su[2]==_D;underscore[2]=len(value_su[2:])>1 and value_su[-1]==_D
+-			return hex_fun(sign*int(value_s[2:],16),width=width,underscore=underscore,anchor=node.anchor)
+-		elif value_s.startswith('0o'):
+-			if self.resolver.processing_version>(1,1)and value_s[2]==_F:width=len(value_s[2:])
+-			if underscore is not _A:underscore[1]=value_su[2]==_D;underscore[2]=len(value_su[2:])>1 and value_su[-1]==_D
+-			return OctalInt(sign*int(value_s[2:],8),width=width,underscore=underscore,anchor=node.anchor)
+-		elif self.resolver.processing_version!=(1,2)and value_s[0]==_F:return sign*int(value_s,8)
+-		elif self.resolver.processing_version!=(1,2)and _G in value_s:
+-			digits=[int(part)for part in value_s.split(_G)];digits.reverse();base=1;value=0
+-			for digit in digits:value+=digit*base;base*=60
+-			return sign*value
+-		elif self.resolver.processing_version>(1,1)and value_s[0]==_F:
+-			if underscore is not _A:underscore[2]=len(value_su)>1 and value_su[-1]==_D
+-			return ScalarInt(sign*int(value_s),width=len(value_s),underscore=underscore)
+-		elif underscore:underscore[2]=len(value_su)>1 and value_su[-1]==_D;return ScalarInt(sign*int(value_s),width=_A,underscore=underscore,anchor=node.anchor)
+-		elif node.anchor:return ScalarInt(sign*int(value_s),width=_A,anchor=node.anchor)
+-		else:return sign*int(value_s)
+-	def construct_yaml_float(self,node):
+-		A='E'
+-		def leading_zeros(v):
+-			lead0=0;idx=0
+-			while idx<len(v)and v[idx]in'0.':
+-				if v[idx]==_F:lead0+=1
+-				idx+=1
+-			return lead0
+-		m_sign=_B;value_so=to_str(self.construct_scalar(node));value_s=value_so.replace(_D,'').lower();sign=+1
+-		if value_s[0]==_J:sign=-1
+-		if value_s[0]in _M:m_sign=value_s[0];value_s=value_s[1:]
+-		if value_s==_p:return sign*self.inf_value
+-		if value_s==_q:return self.nan_value
+-		if self.resolver.processing_version!=(1,2)and _G in value_s:
+-			digits=[float(part)for part in value_s.split(_G)];digits.reverse();base=1;value=0.0
+-			for digit in digits:value+=digit*base;base*=60
+-			return sign*value
+-		if _N in value_s:
+-			try:mantissa,exponent=value_so.split(_N);exp=_N
+-			except ValueError:mantissa,exponent=value_so.split(A);exp=A
+-			if self.resolver.processing_version!=(1,2):
+-				if _H not in mantissa:warnings.warn(MantissaNoDotYAML1_1Warning(node,value_so))
+-			lead0=leading_zeros(mantissa);width=len(mantissa);prec=mantissa.find(_H)
+-			if m_sign:width-=1
+-			e_width=len(exponent);e_sign=exponent[0]in _M;return ScalarFloat(sign*float(value_s),width=width,prec=prec,m_sign=m_sign,m_lead0=lead0,exp=exp,e_width=e_width,e_sign=e_sign,anchor=node.anchor)
+-		width=len(value_so);prec=value_so.index(_H);lead0=leading_zeros(value_so);return ScalarFloat(sign*float(value_s),width=width,prec=prec,m_sign=m_sign,m_lead0=lead0,anchor=node.anchor)
+-	def construct_yaml_str(self,node):
+-		value=self.construct_scalar(node)
+-		if isinstance(value,ScalarString):return value
+-		if PY3:return value
+-		try:return value.encode(_S)
+-		except AttributeError:return value
+-		except UnicodeEncodeError:return value
+-	def construct_rt_sequence(self,node,seqtyp,deep=_B):
+-		if not isinstance(node,SequenceNode):raise ConstructorError(_A,_A,_h%node.id,node.start_mark)
+-		ret_val=[]
+-		if node.comment:
+-			seqtyp._yaml_add_comment(node.comment[:2])
+-			if len(node.comment)>2:seqtyp.yaml_end_comment_extend(node.comment[2],clear=_C)
+-		if node.anchor:
+-			from dynaconf.vendor.ruamel.yaml.serializer import templated_id
+-			if not templated_id(node.anchor):seqtyp.yaml_set_anchor(node.anchor)
+-		for (idx,child) in enumerate(node.value):
+-			if child.comment:seqtyp._yaml_add_comment(child.comment,key=idx);child.comment=_A
+-			ret_val.append(self.construct_object(child,deep=deep));seqtyp._yaml_set_idx_line_col(idx,[child.start_mark.line,child.start_mark.column])
+-		return ret_val
+-	def flatten_mapping(self,node):
+-		def constructed(value_node):
+-			if value_node in self.constructed_objects:value=self.constructed_objects[value_node]
+-			else:value=self.construct_object(value_node,deep=_B)
+-			return value
+-		merge_map_list=[];index=0
+-		while index<len(node.value):
+-			key_node,value_node=node.value[index]
+-			if key_node.tag==_k:
+-				if merge_map_list:
+-					if self.allow_duplicate_keys:del node.value[index];index+=1;continue
+-					args=[_E,node.start_mark,_Y.format(key_node.value),key_node.start_mark,_l,_m]
+-					if self.allow_duplicate_keys is _A:warnings.warn(DuplicateKeyFutureWarning(*args))
+-					else:raise DuplicateKeyError(*args)
+-				del node.value[index]
+-				if isinstance(value_node,MappingNode):merge_map_list.append((index,constructed(value_node)))
+-				elif isinstance(value_node,SequenceNode):
+-					for subnode in value_node.value:
+-						if not isinstance(subnode,MappingNode):raise ConstructorError(_E,node.start_mark,_n%subnode.id,subnode.start_mark)
+-						merge_map_list.append((index,constructed(subnode)))
+-				else:raise ConstructorError(_E,node.start_mark,_o%value_node.id,value_node.start_mark)
+-			elif key_node.tag==_Z:key_node.tag=_R;index+=1
+-			else:index+=1
+-		return merge_map_list
+-	def _sentinel(self):0
+-	def construct_mapping(self,node,maptyp,deep=_B):
+-		if not isinstance(node,MappingNode):raise ConstructorError(_A,_A,_P%node.id,node.start_mark)
+-		merge_map=self.flatten_mapping(node)
+-		if node.comment:
+-			maptyp._yaml_add_comment(node.comment[:2])
+-			if len(node.comment)>2:maptyp.yaml_end_comment_extend(node.comment[2],clear=_C)
+-		if node.anchor:
+-			from dynaconf.vendor.ruamel.yaml.serializer import templated_id
+-			if not templated_id(node.anchor):maptyp.yaml_set_anchor(node.anchor)
+-		last_key,last_value=_A,self._sentinel
+-		for (key_node,value_node) in node.value:
+-			key=self.construct_object(key_node,deep=_C)
+-			if not isinstance(key,Hashable):
+-				if isinstance(key,MutableSequence):
+-					key_s=CommentedKeySeq(key)
+-					if key_node.flow_style is _C:key_s.fa.set_flow_style()
+-					elif key_node.flow_style is _B:key_s.fa.set_block_style()
+-					key=key_s
+-				elif isinstance(key,MutableMapping):
+-					key_m=CommentedKeyMap(key)
+-					if key_node.flow_style is _C:key_m.fa.set_flow_style()
+-					elif key_node.flow_style is _B:key_m.fa.set_block_style()
+-					key=key_m
+-			if PY2:
+-				try:hash(key)
+-				except TypeError as exc:raise ConstructorError(_E,node.start_mark,_W%exc,key_node.start_mark)
+-			elif not isinstance(key,Hashable):raise ConstructorError(_E,node.start_mark,_X,key_node.start_mark)
+-			value=self.construct_object(value_node,deep=deep)
+-			if self.check_mapping_key(node,key_node,maptyp,key,value):
+-				if key_node.comment and len(key_node.comment)>4 and key_node.comment[4]:
+-					if last_value is _A:key_node.comment[0]=key_node.comment.pop(4);maptyp._yaml_add_comment(key_node.comment,value=last_key)
+-					else:key_node.comment[2]=key_node.comment.pop(4);maptyp._yaml_add_comment(key_node.comment,key=key)
+-					key_node.comment=_A
+-				if key_node.comment:maptyp._yaml_add_comment(key_node.comment,key=key)
+-				if value_node.comment:maptyp._yaml_add_comment(value_node.comment,value=key)
+-				maptyp._yaml_set_kv_line_col(key,[key_node.start_mark.line,key_node.start_mark.column,value_node.start_mark.line,value_node.start_mark.column]);maptyp[key]=value;last_key,last_value=key,value
+-		if merge_map:maptyp.add_yaml_merge(merge_map)
+-	def construct_setting(self,node,typ,deep=_B):
+-		if not isinstance(node,MappingNode):raise ConstructorError(_A,_A,_P%node.id,node.start_mark)
+-		if node.comment:
+-			typ._yaml_add_comment(node.comment[:2])
+-			if len(node.comment)>2:typ.yaml_end_comment_extend(node.comment[2],clear=_C)
+-		if node.anchor:
+-			from dynaconf.vendor.ruamel.yaml.serializer import templated_id
+-			if not templated_id(node.anchor):typ.yaml_set_anchor(node.anchor)
+-		for (key_node,value_node) in node.value:
+-			key=self.construct_object(key_node,deep=_C)
+-			if not isinstance(key,Hashable):
+-				if isinstance(key,list):key=tuple(key)
+-			if PY2:
+-				try:hash(key)
+-				except TypeError as exc:raise ConstructorError(_E,node.start_mark,_W%exc,key_node.start_mark)
+-			elif not isinstance(key,Hashable):raise ConstructorError(_E,node.start_mark,_X,key_node.start_mark)
+-			value=self.construct_object(value_node,deep=deep);self.check_set_key(node,key_node,typ,key)
+-			if key_node.comment:typ._yaml_add_comment(key_node.comment,key=key)
+-			if value_node.comment:typ._yaml_add_comment(value_node.comment,value=key)
+-			typ.add(key)
+-	def construct_yaml_seq(self,node):
+-		data=CommentedSeq();data._yaml_set_line_col(node.start_mark.line,node.start_mark.column)
+-		if node.comment:data._yaml_add_comment(node.comment)
+-		yield data;data.extend(self.construct_rt_sequence(node,data));self.set_collection_style(data,node)
+-	def construct_yaml_map(self,node):data=CommentedMap();data._yaml_set_line_col(node.start_mark.line,node.start_mark.column);yield data;self.construct_mapping(node,data,deep=_C);self.set_collection_style(data,node)
+-	def set_collection_style(self,data,node):
+-		if len(data)==0:return
+-		if node.flow_style is _C:data.fa.set_flow_style()
+-		elif node.flow_style is _B:data.fa.set_block_style()
+-	def construct_yaml_object(self,node,cls):
+-		data=cls.__new__(cls);yield data
+-		if hasattr(data,_V):state=SafeConstructor.construct_mapping(self,node,deep=_C);data.__setstate__(state)
+-		else:state=SafeConstructor.construct_mapping(self,node);data.__dict__.update(state)
+-	def construct_yaml_omap(self,node):
+-		omap=CommentedOrderedMap();omap._yaml_set_line_col(node.start_mark.line,node.start_mark.column)
+-		if node.flow_style is _C:omap.fa.set_flow_style()
+-		elif node.flow_style is _B:omap.fa.set_block_style()
+-		yield omap
+-		if node.comment:
+-			omap._yaml_add_comment(node.comment[:2])
+-			if len(node.comment)>2:omap.yaml_end_comment_extend(node.comment[2],clear=_C)
+-		if not isinstance(node,SequenceNode):raise ConstructorError(_L,node.start_mark,_b%node.id,node.start_mark)
+-		for subnode in node.value:
+-			if not isinstance(subnode,MappingNode):raise ConstructorError(_L,node.start_mark,_c%subnode.id,subnode.start_mark)
+-			if len(subnode.value)!=1:raise ConstructorError(_L,node.start_mark,_d%len(subnode.value),subnode.start_mark)
+-			key_node,value_node=subnode.value[0];key=self.construct_object(key_node);assert key not in omap;value=self.construct_object(value_node)
+-			if key_node.comment:omap._yaml_add_comment(key_node.comment,key=key)
+-			if subnode.comment:omap._yaml_add_comment(subnode.comment,key=key)
+-			if value_node.comment:omap._yaml_add_comment(value_node.comment,value=key)
+-			omap[key]=value
+-	def construct_yaml_set(self,node):data=CommentedSet();data._yaml_set_line_col(node.start_mark.line,node.start_mark.column);yield data;self.construct_setting(node,data)
+-	def construct_undefined(self,node):
+-		try:
+-			if isinstance(node,MappingNode):
+-				data=CommentedMap();data._yaml_set_line_col(node.start_mark.line,node.start_mark.column)
+-				if node.flow_style is _C:data.fa.set_flow_style()
+-				elif node.flow_style is _B:data.fa.set_block_style()
+-				data.yaml_set_tag(node.tag);yield data
+-				if node.anchor:data.yaml_set_anchor(node.anchor)
+-				self.construct_mapping(node,data);return
+-			elif isinstance(node,ScalarNode):
+-				data2=TaggedScalar();data2.value=self.construct_scalar(node);data2.style=node.style;data2.yaml_set_tag(node.tag);yield data2
+-				if node.anchor:data2.yaml_set_anchor(node.anchor,always_dump=_C)
+-				return
+-			elif isinstance(node,SequenceNode):
+-				data3=CommentedSeq();data3._yaml_set_line_col(node.start_mark.line,node.start_mark.column)
+-				if node.flow_style is _C:data3.fa.set_flow_style()
+-				elif node.flow_style is _B:data3.fa.set_block_style()
+-				data3.yaml_set_tag(node.tag);yield data3
+-				if node.anchor:data3.yaml_set_anchor(node.anchor)
+-				data3.extend(self.construct_sequence(node));return
+-		except:pass
+-		raise ConstructorError(_A,_A,_z%utf8(node.tag),node.start_mark)
+-	def construct_yaml_timestamp(self,node,values=_A):
+-		B='t';A='tz'
+-		try:match=self.timestamp_regexp.match(node.value)
+-		except TypeError:match=_A
+-		if match is _A:raise ConstructorError(_A,_A,_t.format(node.value),node.start_mark)
+-		values=match.groupdict()
+-		if not values[_T]:return SafeConstructor.construct_yaml_timestamp(self,node,values)
+-		for part in [B,_K,_U,_O]:
+-			if values[part]:break
+-		else:return SafeConstructor.construct_yaml_timestamp(self,node,values)
+-		year=int(values[_u]);month=int(values[_v]);day=int(values[_w]);hour=int(values[_T]);minute=int(values[_x]);second=int(values[_y]);fraction=0
+-		if values[_I]:
+-			fraction_s=values[_I][:6]
+-			while len(fraction_s)<6:fraction_s+=_F
+-			fraction=int(fraction_s)
+-			if len(values[_I])>6 and int(values[_I][6])>4:fraction+=1
+-		delta=_A
+-		if values[_K]:
+-			tz_hour=int(values[_U]);minutes=values[_O];tz_minute=int(minutes)if minutes else 0;delta=datetime.timedelta(hours=tz_hour,minutes=tz_minute)
+-			if values[_K]==_J:delta=-delta
+-		if delta:
+-			dt=datetime.datetime(year,month,day,hour,minute);dt-=delta;data=TimeStamp(dt.year,dt.month,dt.day,dt.hour,dt.minute,second,fraction);data._yaml['delta']=delta;tz=values[_K]+values[_U]
+-			if values[_O]:tz+=_G+values[_O]
+-			data._yaml[A]=tz
+-		else:
+-			data=TimeStamp(year,month,day,hour,minute,second,fraction)
+-			if values[A]:data._yaml[A]=values[A]
+-		if values[B]:data._yaml[B]=_C
+-		return data
+-	def construct_yaml_bool(self,node):
+-		b=SafeConstructor.construct_yaml_bool(self,node)
+-		if node.anchor:return ScalarBoolean(b,anchor=node.anchor)
+-		return b
+-RoundTripConstructor.add_constructor(_A0,RoundTripConstructor.construct_yaml_null)
+-RoundTripConstructor.add_constructor(_A1,RoundTripConstructor.construct_yaml_bool)
+-RoundTripConstructor.add_constructor(_A2,RoundTripConstructor.construct_yaml_int)
+-RoundTripConstructor.add_constructor(_A3,RoundTripConstructor.construct_yaml_float)
+-RoundTripConstructor.add_constructor(_A4,RoundTripConstructor.construct_yaml_binary)
+-RoundTripConstructor.add_constructor(_A5,RoundTripConstructor.construct_yaml_timestamp)
+-RoundTripConstructor.add_constructor(_A6,RoundTripConstructor.construct_yaml_omap)
+-RoundTripConstructor.add_constructor(_A7,RoundTripConstructor.construct_yaml_pairs)
+-RoundTripConstructor.add_constructor(_A8,RoundTripConstructor.construct_yaml_set)
+-RoundTripConstructor.add_constructor(_R,RoundTripConstructor.construct_yaml_str)
+-RoundTripConstructor.add_constructor(_A9,RoundTripConstructor.construct_yaml_seq)
+-RoundTripConstructor.add_constructor(_AA,RoundTripConstructor.construct_yaml_map)
+-RoundTripConstructor.add_constructor(_A,RoundTripConstructor.construct_undefined)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/cyaml.py b/dynaconf/vendor/ruamel/yaml/cyaml.py
+deleted file mode 100644
+index 73ee79d..0000000
+--- a/dynaconf/vendor/ruamel/yaml/cyaml.py
++++ /dev/null
+@@ -1,20 +0,0 @@
+-from __future__ import absolute_import
+-_A=None
+-from _ruamel_yaml import CParser,CEmitter
+-from .constructor import Constructor,BaseConstructor,SafeConstructor
+-from .representer import Representer,SafeRepresenter,BaseRepresenter
+-from .resolver import Resolver,BaseResolver
+-if False:from typing import Any,Union,Optional;from .compat import StreamTextType,StreamType,VersionType
+-__all__=['CBaseLoader','CSafeLoader','CLoader','CBaseDumper','CSafeDumper','CDumper']
+-class CBaseLoader(CParser,BaseConstructor,BaseResolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):CParser.__init__(A,stream);A._parser=A._composer=A;BaseConstructor.__init__(A,loader=A);BaseResolver.__init__(A,loadumper=A)
+-class CSafeLoader(CParser,SafeConstructor,Resolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):CParser.__init__(A,stream);A._parser=A._composer=A;SafeConstructor.__init__(A,loader=A);Resolver.__init__(A,loadumper=A)
+-class CLoader(CParser,Constructor,Resolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):CParser.__init__(A,stream);A._parser=A._composer=A;Constructor.__init__(A,loader=A);Resolver.__init__(A,loadumper=A)
+-class CBaseDumper(CEmitter,BaseRepresenter,BaseResolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):CEmitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,encoding=encoding,allow_unicode=allow_unicode,line_break=line_break,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags);A._emitter=A._serializer=A._representer=A;BaseRepresenter.__init__(A,default_style=default_style,default_flow_style=default_flow_style,dumper=A);BaseResolver.__init__(A,loadumper=A)
+-class CSafeDumper(CEmitter,SafeRepresenter,Resolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):A._emitter=A._serializer=A._representer=A;CEmitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,encoding=encoding,allow_unicode=allow_unicode,line_break=line_break,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags);A._emitter=A._serializer=A._representer=A;SafeRepresenter.__init__(A,default_style=default_style,default_flow_style=default_flow_style);Resolver.__init__(A)
+-class CDumper(CEmitter,Representer,Resolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):CEmitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,encoding=encoding,allow_unicode=allow_unicode,line_break=line_break,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags);A._emitter=A._serializer=A._representer=A;Representer.__init__(A,default_style=default_style,default_flow_style=default_flow_style);Resolver.__init__(A)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/dumper.py b/dynaconf/vendor/ruamel/yaml/dumper.py
+deleted file mode 100644
+index 8b31354..0000000
+--- a/dynaconf/vendor/ruamel/yaml/dumper.py
++++ /dev/null
+@@ -1,16 +0,0 @@
+-from __future__ import absolute_import
+-_A=None
+-from .emitter import Emitter
+-from .serializer import Serializer
+-from .representer import Representer,SafeRepresenter,BaseRepresenter,RoundTripRepresenter
+-from .resolver import Resolver,BaseResolver,VersionedResolver
+-if False:from typing import Any,Dict,List,Union,Optional;from .compat import StreamType,VersionType
+-__all__=['BaseDumper','SafeDumper','Dumper','RoundTripDumper']
+-class BaseDumper(Emitter,Serializer,BaseRepresenter,BaseResolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):Emitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,block_seq_indent=block_seq_indent,dumper=A);Serializer.__init__(A,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,dumper=A);BaseRepresenter.__init__(A,default_style=default_style,default_flow_style=default_flow_style,dumper=A);BaseResolver.__init__(A,loadumper=A)
+-class SafeDumper(Emitter,Serializer,SafeRepresenter,Resolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):Emitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,block_seq_indent=block_seq_indent,dumper=A);Serializer.__init__(A,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,dumper=A);SafeRepresenter.__init__(A,default_style=default_style,default_flow_style=default_flow_style,dumper=A);Resolver.__init__(A,loadumper=A)
+-class Dumper(Emitter,Serializer,Representer,Resolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):Emitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,block_seq_indent=block_seq_indent,dumper=A);Serializer.__init__(A,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,dumper=A);Representer.__init__(A,default_style=default_style,default_flow_style=default_flow_style,dumper=A);Resolver.__init__(A,loadumper=A)
+-class RoundTripDumper(Emitter,Serializer,RoundTripRepresenter,VersionedResolver):
+-	def __init__(A,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):Emitter.__init__(A,stream,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,block_seq_indent=block_seq_indent,top_level_colon_align=top_level_colon_align,prefix_colon=prefix_colon,dumper=A);Serializer.__init__(A,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,dumper=A);RoundTripRepresenter.__init__(A,default_style=default_style,default_flow_style=default_flow_style,dumper=A);VersionedResolver.__init__(A,loader=A)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/emitter.py b/dynaconf/vendor/ruamel/yaml/emitter.py
+deleted file mode 100644
+index 036530a..0000000
+--- a/dynaconf/vendor/ruamel/yaml/emitter.py
++++ /dev/null
+@@ -1,678 +0,0 @@
+-from __future__ import absolute_import,print_function
+-_a='\x07'
+-_Z='\ufeff'
+-_Y='\ue000'
+-_X='\ud7ff'
+-_W='\x85'
+-_V='%%%02X'
+-_U='version'
+-_T="-;/?:@&=+$,_.~*'()[]"
+-_S='---'
+-_R=' \n\x85\u2028\u2029'
+-_Q='\xa0'
+-_P='a'
+-_O='0'
+-_N=','
+-_M='...'
+-_L='\\'
+-_K='['
+-_J="'"
+-_I='?'
+-_H='"'
+-_G='!'
+-_F='\n\x85\u2028\u2029'
+-_E='\n'
+-_D=' '
+-_C=False
+-_B=None
+-_A=True
+-import sys
+-from .error import YAMLError,YAMLStreamError
+-from .events import *
+-from .compat import utf8,text_type,PY2,nprint,dbg,DBG_EVENT,check_anchorname_char
+-if _C:from typing import Any,Dict,List,Union,Text,Tuple,Optional;from .compat import StreamType
+-__all__=['Emitter','EmitterError']
+-class EmitterError(YAMLError):0
+-class ScalarAnalysis:
+-	def __init__(self,scalar,empty,multiline,allow_flow_plain,allow_block_plain,allow_single_quoted,allow_double_quoted,allow_block):self.scalar=scalar;self.empty=empty;self.multiline=multiline;self.allow_flow_plain=allow_flow_plain;self.allow_block_plain=allow_block_plain;self.allow_single_quoted=allow_single_quoted;self.allow_double_quoted=allow_double_quoted;self.allow_block=allow_block
+-class Indents:
+-	def __init__(self):self.values=[]
+-	def append(self,val,seq):self.values.append((val,seq))
+-	def pop(self):return self.values.pop()[0]
+-	def last_seq(self):
+-		try:return self.values[-2][1]
+-		except IndexError:return _C
+-	def seq_flow_align(self,seq_indent,column):
+-		if len(self.values)<2 or not self.values[-1][1]:return 0
+-		base=self.values[-1][0]if self.values[-1][0]is not _B else 0;return base+seq_indent-column-1
+-	def __len__(self):return len(self.values)
+-class Emitter:
+-	DEFAULT_TAG_PREFIXES={_G:_G,'tag:yaml.org,2002:':'!!'};MAX_SIMPLE_KEY_LENGTH=128
+-	def __init__(self,stream,canonical=_B,indent=_B,width=_B,allow_unicode=_B,line_break=_B,block_seq_indent=_B,top_level_colon_align=_B,prefix_colon=_B,brace_single_entry_mapping_in_flow_sequence=_B,dumper=_B):
+-		self.dumper=dumper
+-		if self.dumper is not _B and getattr(self.dumper,'_emitter',_B)is _B:self.dumper._emitter=self
+-		self.stream=stream;self.encoding=_B;self.allow_space_break=_B;self.states=[];self.state=self.expect_stream_start;self.events=[];self.event=_B;self.indents=Indents();self.indent=_B;self.flow_context=[];self.root_context=_C;self.sequence_context=_C;self.mapping_context=_C;self.simple_key_context=_C;self.line=0;self.column=0;self.whitespace=_A;self.indention=_A;self.compact_seq_seq=_A;self.compact_seq_map=_A;self.no_newline=_B;self.open_ended=_C;self.colon=':';self.prefixed_colon=self.colon if prefix_colon is _B else prefix_colon+self.colon;self.brace_single_entry_mapping_in_flow_sequence=brace_single_entry_mapping_in_flow_sequence;self.canonical=canonical;self.allow_unicode=allow_unicode;self.unicode_supplementary=sys.maxunicode>65535;self.sequence_dash_offset=block_seq_indent if block_seq_indent else 0;self.top_level_colon_align=top_level_colon_align;self.best_sequence_indent=2;self.requested_indent=indent
+-		if indent and 1<indent<10:self.best_sequence_indent=indent
+-		self.best_map_indent=self.best_sequence_indent;self.best_width=80
+-		if width and width>self.best_sequence_indent*2:self.best_width=width
+-		self.best_line_break=_E
+-		if line_break in['\r',_E,'\r\n']:self.best_line_break=line_break
+-		self.tag_prefixes=_B;self.prepared_anchor=_B;self.prepared_tag=_B;self.analysis=_B;self.style=_B;self.scalar_after_indicator=_A
+-	@property
+-	def stream(self):
+-		try:return self._stream
+-		except AttributeError:raise YAMLStreamError('output stream needs to specified')
+-	@stream.setter
+-	def stream(self,val):
+-		if val is _B:return
+-		if not hasattr(val,'write'):raise YAMLStreamError('stream argument needs to have a write() method')
+-		self._stream=val
+-	@property
+-	def serializer(self):
+-		try:
+-			if hasattr(self.dumper,'typ'):return self.dumper.serializer
+-			return self.dumper._serializer
+-		except AttributeError:return self
+-	@property
+-	def flow_level(self):return len(self.flow_context)
+-	def dispose(self):self.states=[];self.state=_B
+-	def emit(self,event):
+-		if dbg(DBG_EVENT):nprint(event)
+-		self.events.append(event)
+-		while not self.need_more_events():self.event=self.events.pop(0);self.state();self.event=_B
+-	def need_more_events(self):
+-		if not self.events:return _A
+-		event=self.events[0]
+-		if isinstance(event,DocumentStartEvent):return self.need_events(1)
+-		elif isinstance(event,SequenceStartEvent):return self.need_events(2)
+-		elif isinstance(event,MappingStartEvent):return self.need_events(3)
+-		else:return _C
+-	def need_events(self,count):
+-		level=0
+-		for event in self.events[1:]:
+-			if isinstance(event,(DocumentStartEvent,CollectionStartEvent)):level+=1
+-			elif isinstance(event,(DocumentEndEvent,CollectionEndEvent)):level-=1
+-			elif isinstance(event,StreamEndEvent):level=-1
+-			if level<0:return _C
+-		return len(self.events)<count+1
+-	def increase_indent(self,flow=_C,sequence=_B,indentless=_C):
+-		self.indents.append(self.indent,sequence)
+-		if self.indent is _B:
+-			if flow:self.indent=self.requested_indent
+-			else:self.indent=0
+-		elif not indentless:self.indent+=self.best_sequence_indent if self.indents.last_seq()else self.best_map_indent
+-	def expect_stream_start(self):
+-		A='encoding'
+-		if isinstance(self.event,StreamStartEvent):
+-			if PY2:
+-				if self.event.encoding and not getattr(self.stream,A,_B):self.encoding=self.event.encoding
+-			elif self.event.encoding and not hasattr(self.stream,A):self.encoding=self.event.encoding
+-			self.write_stream_start();self.state=self.expect_first_document_start
+-		else:raise EmitterError('expected StreamStartEvent, but got %s'%(self.event,))
+-	def expect_nothing(self):raise EmitterError('expected nothing, but got %s'%(self.event,))
+-	def expect_first_document_start(self):return self.expect_document_start(first=_A)
+-	def expect_document_start(self,first=_C):
+-		if isinstance(self.event,DocumentStartEvent):
+-			if(self.event.version or self.event.tags)and self.open_ended:self.write_indicator(_M,_A);self.write_indent()
+-			if self.event.version:version_text=self.prepare_version(self.event.version);self.write_version_directive(version_text)
+-			self.tag_prefixes=self.DEFAULT_TAG_PREFIXES.copy()
+-			if self.event.tags:
+-				handles=sorted(self.event.tags.keys())
+-				for handle in handles:prefix=self.event.tags[handle];self.tag_prefixes[prefix]=handle;handle_text=self.prepare_tag_handle(handle);prefix_text=self.prepare_tag_prefix(prefix);self.write_tag_directive(handle_text,prefix_text)
+-			implicit=first and not self.event.explicit and not self.canonical and not self.event.version and not self.event.tags and not self.check_empty_document()
+-			if not implicit:
+-				self.write_indent();self.write_indicator(_S,_A)
+-				if self.canonical:self.write_indent()
+-			self.state=self.expect_document_root
+-		elif isinstance(self.event,StreamEndEvent):
+-			if self.open_ended:self.write_indicator(_M,_A);self.write_indent()
+-			self.write_stream_end();self.state=self.expect_nothing
+-		else:raise EmitterError('expected DocumentStartEvent, but got %s'%(self.event,))
+-	def expect_document_end(self):
+-		if isinstance(self.event,DocumentEndEvent):
+-			self.write_indent()
+-			if self.event.explicit:self.write_indicator(_M,_A);self.write_indent()
+-			self.flush_stream();self.state=self.expect_document_start
+-		else:raise EmitterError('expected DocumentEndEvent, but got %s'%(self.event,))
+-	def expect_document_root(self):self.states.append(self.expect_document_end);self.expect_node(root=_A)
+-	def expect_node(self,root=_C,sequence=_C,mapping=_C,simple_key=_C):
+-		self.root_context=root;self.sequence_context=sequence;self.mapping_context=mapping;self.simple_key_context=simple_key
+-		if isinstance(self.event,AliasEvent):self.expect_alias()
+-		elif isinstance(self.event,(ScalarEvent,CollectionStartEvent)):
+-			if self.process_anchor('&')and isinstance(self.event,ScalarEvent)and self.sequence_context:self.sequence_context=_C
+-			if root and isinstance(self.event,ScalarEvent)and not self.scalar_after_indicator:self.write_indent()
+-			self.process_tag()
+-			if isinstance(self.event,ScalarEvent):self.expect_scalar()
+-			elif isinstance(self.event,SequenceStartEvent):
+-				i2,n2=self.indention,self.no_newline
+-				if self.event.comment:
+-					if self.event.flow_style is _C and self.event.comment:
+-						if self.write_post_comment(self.event):self.indention=_C;self.no_newline=_A
+-					if self.write_pre_comment(self.event):self.indention=i2;self.no_newline=not self.indention
+-				if self.flow_level or self.canonical or self.event.flow_style or self.check_empty_sequence():self.expect_flow_sequence()
+-				else:self.expect_block_sequence()
+-			elif isinstance(self.event,MappingStartEvent):
+-				if self.event.flow_style is _C and self.event.comment:self.write_post_comment(self.event)
+-				if self.event.comment and self.event.comment[1]:self.write_pre_comment(self.event)
+-				if self.flow_level or self.canonical or self.event.flow_style or self.check_empty_mapping():self.expect_flow_mapping(single=self.event.nr_items==1)
+-				else:self.expect_block_mapping()
+-		else:raise EmitterError('expected NodeEvent, but got %s'%(self.event,))
+-	def expect_alias(self):
+-		if self.event.anchor is _B:raise EmitterError('anchor is not specified for alias')
+-		self.process_anchor('*');self.state=self.states.pop()
+-	def expect_scalar(self):self.increase_indent(flow=_A);self.process_scalar();self.indent=self.indents.pop();self.state=self.states.pop()
+-	def expect_flow_sequence(self):ind=self.indents.seq_flow_align(self.best_sequence_indent,self.column);self.write_indicator(_D*ind+_K,_A,whitespace=_A);self.increase_indent(flow=_A,sequence=_A);self.flow_context.append(_K);self.state=self.expect_first_flow_sequence_item
+-	def expect_first_flow_sequence_item(self):
+-		if isinstance(self.event,SequenceEndEvent):
+-			self.indent=self.indents.pop();popped=self.flow_context.pop();assert popped==_K;self.write_indicator(']',_C)
+-			if self.event.comment and self.event.comment[0]:self.write_post_comment(self.event)
+-			elif self.flow_level==0:self.write_line_break()
+-			self.state=self.states.pop()
+-		else:
+-			if self.canonical or self.column>self.best_width:self.write_indent()
+-			self.states.append(self.expect_flow_sequence_item);self.expect_node(sequence=_A)
+-	def expect_flow_sequence_item(self):
+-		if isinstance(self.event,SequenceEndEvent):
+-			self.indent=self.indents.pop();popped=self.flow_context.pop();assert popped==_K
+-			if self.canonical:self.write_indicator(_N,_C);self.write_indent()
+-			self.write_indicator(']',_C)
+-			if self.event.comment and self.event.comment[0]:self.write_post_comment(self.event)
+-			else:self.no_newline=_C
+-			self.state=self.states.pop()
+-		else:
+-			self.write_indicator(_N,_C)
+-			if self.canonical or self.column>self.best_width:self.write_indent()
+-			self.states.append(self.expect_flow_sequence_item);self.expect_node(sequence=_A)
+-	def expect_flow_mapping(self,single=_C):
+-		ind=self.indents.seq_flow_align(self.best_sequence_indent,self.column);map_init='{'
+-		if single and self.flow_level and self.flow_context[-1]==_K and not self.canonical and not self.brace_single_entry_mapping_in_flow_sequence:map_init=''
+-		self.write_indicator(_D*ind+map_init,_A,whitespace=_A);self.flow_context.append(map_init);self.increase_indent(flow=_A,sequence=_C);self.state=self.expect_first_flow_mapping_key
+-	def expect_first_flow_mapping_key(self):
+-		if isinstance(self.event,MappingEndEvent):
+-			self.indent=self.indents.pop();popped=self.flow_context.pop();assert popped=='{';self.write_indicator('}',_C)
+-			if self.event.comment and self.event.comment[0]:self.write_post_comment(self.event)
+-			elif self.flow_level==0:self.write_line_break()
+-			self.state=self.states.pop()
+-		else:
+-			if self.canonical or self.column>self.best_width:self.write_indent()
+-			if not self.canonical and self.check_simple_key():self.states.append(self.expect_flow_mapping_simple_value);self.expect_node(mapping=_A,simple_key=_A)
+-			else:self.write_indicator(_I,_A);self.states.append(self.expect_flow_mapping_value);self.expect_node(mapping=_A)
+-	def expect_flow_mapping_key(self):
+-		if isinstance(self.event,MappingEndEvent):
+-			self.indent=self.indents.pop();popped=self.flow_context.pop();assert popped in['{','']
+-			if self.canonical:self.write_indicator(_N,_C);self.write_indent()
+-			if popped!='':self.write_indicator('}',_C)
+-			if self.event.comment and self.event.comment[0]:self.write_post_comment(self.event)
+-			else:self.no_newline=_C
+-			self.state=self.states.pop()
+-		else:
+-			self.write_indicator(_N,_C)
+-			if self.canonical or self.column>self.best_width:self.write_indent()
+-			if not self.canonical and self.check_simple_key():self.states.append(self.expect_flow_mapping_simple_value);self.expect_node(mapping=_A,simple_key=_A)
+-			else:self.write_indicator(_I,_A);self.states.append(self.expect_flow_mapping_value);self.expect_node(mapping=_A)
+-	def expect_flow_mapping_simple_value(self):self.write_indicator(self.prefixed_colon,_C);self.states.append(self.expect_flow_mapping_key);self.expect_node(mapping=_A)
+-	def expect_flow_mapping_value(self):
+-		if self.canonical or self.column>self.best_width:self.write_indent()
+-		self.write_indicator(self.prefixed_colon,_A);self.states.append(self.expect_flow_mapping_key);self.expect_node(mapping=_A)
+-	def expect_block_sequence(self):
+-		if self.mapping_context:indentless=not self.indention
+-		else:
+-			indentless=_C
+-			if not self.compact_seq_seq and self.column!=0:self.write_line_break()
+-		self.increase_indent(flow=_C,sequence=_A,indentless=indentless);self.state=self.expect_first_block_sequence_item
+-	def expect_first_block_sequence_item(self):return self.expect_block_sequence_item(first=_A)
+-	def expect_block_sequence_item(self,first=_C):
+-		if not first and isinstance(self.event,SequenceEndEvent):
+-			if self.event.comment and self.event.comment[1]:self.write_pre_comment(self.event)
+-			self.indent=self.indents.pop();self.state=self.states.pop();self.no_newline=_C
+-		else:
+-			if self.event.comment and self.event.comment[1]:self.write_pre_comment(self.event)
+-			nonl=self.no_newline if self.column==0 else _C;self.write_indent();ind=self.sequence_dash_offset;self.write_indicator(_D*ind+'-',_A,indention=_A)
+-			if nonl or self.sequence_dash_offset+2>self.best_sequence_indent:self.no_newline=_A
+-			self.states.append(self.expect_block_sequence_item);self.expect_node(sequence=_A)
+-	def expect_block_mapping(self):
+-		if not self.mapping_context and not(self.compact_seq_map or self.column==0):self.write_line_break()
+-		self.increase_indent(flow=_C,sequence=_C);self.state=self.expect_first_block_mapping_key
+-	def expect_first_block_mapping_key(self):return self.expect_block_mapping_key(first=_A)
+-	def expect_block_mapping_key(self,first=_C):
+-		if not first and isinstance(self.event,MappingEndEvent):
+-			if self.event.comment and self.event.comment[1]:self.write_pre_comment(self.event)
+-			self.indent=self.indents.pop();self.state=self.states.pop()
+-		else:
+-			if self.event.comment and self.event.comment[1]:self.write_pre_comment(self.event)
+-			self.write_indent()
+-			if self.check_simple_key():
+-				if not isinstance(self.event,(SequenceStartEvent,MappingStartEvent)):
+-					try:
+-						if self.event.style==_I:self.write_indicator(_I,_A,indention=_A)
+-					except AttributeError:pass
+-				self.states.append(self.expect_block_mapping_simple_value);self.expect_node(mapping=_A,simple_key=_A)
+-				if isinstance(self.event,AliasEvent):self.stream.write(_D)
+-			else:self.write_indicator(_I,_A,indention=_A);self.states.append(self.expect_block_mapping_value);self.expect_node(mapping=_A)
+-	def expect_block_mapping_simple_value(self):
+-		if getattr(self.event,'style',_B)!=_I:
+-			if self.indent==0 and self.top_level_colon_align is not _B:c=_D*(self.top_level_colon_align-self.column)+self.colon
+-			else:c=self.prefixed_colon
+-			self.write_indicator(c,_C)
+-		self.states.append(self.expect_block_mapping_key);self.expect_node(mapping=_A)
+-	def expect_block_mapping_value(self):self.write_indent();self.write_indicator(self.prefixed_colon,_A,indention=_A);self.states.append(self.expect_block_mapping_key);self.expect_node(mapping=_A)
+-	def check_empty_sequence(self):return isinstance(self.event,SequenceStartEvent)and bool(self.events)and isinstance(self.events[0],SequenceEndEvent)
+-	def check_empty_mapping(self):return isinstance(self.event,MappingStartEvent)and bool(self.events)and isinstance(self.events[0],MappingEndEvent)
+-	def check_empty_document(self):
+-		if not isinstance(self.event,DocumentStartEvent)or not self.events:return _C
+-		event=self.events[0];return isinstance(event,ScalarEvent)and event.anchor is _B and event.tag is _B and event.implicit and event.value==''
+-	def check_simple_key(self):
+-		length=0
+-		if isinstance(self.event,NodeEvent)and self.event.anchor is not _B:
+-			if self.prepared_anchor is _B:self.prepared_anchor=self.prepare_anchor(self.event.anchor)
+-			length+=len(self.prepared_anchor)
+-		if isinstance(self.event,(ScalarEvent,CollectionStartEvent))and self.event.tag is not _B:
+-			if self.prepared_tag is _B:self.prepared_tag=self.prepare_tag(self.event.tag)
+-			length+=len(self.prepared_tag)
+-		if isinstance(self.event,ScalarEvent):
+-			if self.analysis is _B:self.analysis=self.analyze_scalar(self.event.value)
+-			length+=len(self.analysis.scalar)
+-		return length<self.MAX_SIMPLE_KEY_LENGTH and(isinstance(self.event,AliasEvent)or isinstance(self.event,SequenceStartEvent)and self.event.flow_style is _A or isinstance(self.event,MappingStartEvent)and self.event.flow_style is _A or isinstance(self.event,ScalarEvent)and not(self.analysis.empty and self.style and self.style not in'\'"')and not self.analysis.multiline or self.check_empty_sequence()or self.check_empty_mapping())
+-	def process_anchor(self,indicator):
+-		if self.event.anchor is _B:self.prepared_anchor=_B;return _C
+-		if self.prepared_anchor is _B:self.prepared_anchor=self.prepare_anchor(self.event.anchor)
+-		if self.prepared_anchor:self.write_indicator(indicator+self.prepared_anchor,_A);self.no_newline=_C
+-		self.prepared_anchor=_B;return _A
+-	def process_tag(self):
+-		tag=self.event.tag
+-		if isinstance(self.event,ScalarEvent):
+-			if self.style is _B:self.style=self.choose_scalar_style()
+-			if(not self.canonical or tag is _B)and(self.style==''and self.event.implicit[0]or self.style!=''and self.event.implicit[1]):self.prepared_tag=_B;return
+-			if self.event.implicit[0]and tag is _B:tag=_G;self.prepared_tag=_B
+-		elif(not self.canonical or tag is _B)and self.event.implicit:self.prepared_tag=_B;return
+-		if tag is _B:raise EmitterError('tag is not specified')
+-		if self.prepared_tag is _B:self.prepared_tag=self.prepare_tag(tag)
+-		if self.prepared_tag:
+-			self.write_indicator(self.prepared_tag,_A)
+-			if self.sequence_context and not self.flow_level and isinstance(self.event,ScalarEvent):self.no_newline=_A
+-		self.prepared_tag=_B
+-	def choose_scalar_style(self):
+-		if self.analysis is _B:self.analysis=self.analyze_scalar(self.event.value)
+-		if self.event.style==_H or self.canonical:return _H
+-		if(not self.event.style or self.event.style==_I)and(self.event.implicit[0]or not self.event.implicit[2]):
+-			if not(self.simple_key_context and(self.analysis.empty or self.analysis.multiline))and(self.flow_level and self.analysis.allow_flow_plain or not self.flow_level and self.analysis.allow_block_plain):return''
+-		self.analysis.allow_block=_A
+-		if self.event.style and self.event.style in'|>':
+-			if not self.flow_level and not self.simple_key_context and self.analysis.allow_block:return self.event.style
+-		if not self.event.style and self.analysis.allow_double_quoted:
+-			if _J in self.event.value or _E in self.event.value:return _H
+-		if not self.event.style or self.event.style==_J:
+-			if self.analysis.allow_single_quoted and not(self.simple_key_context and self.analysis.multiline):return _J
+-		return _H
+-	def process_scalar(self):
+-		if self.analysis is _B:self.analysis=self.analyze_scalar(self.event.value)
+-		if self.style is _B:self.style=self.choose_scalar_style()
+-		split=not self.simple_key_context
+-		if self.sequence_context and not self.flow_level:self.write_indent()
+-		if self.style==_H:self.write_double_quoted(self.analysis.scalar,split)
+-		elif self.style==_J:self.write_single_quoted(self.analysis.scalar,split)
+-		elif self.style=='>':self.write_folded(self.analysis.scalar)
+-		elif self.style=='|':self.write_literal(self.analysis.scalar,self.event.comment)
+-		else:self.write_plain(self.analysis.scalar,split)
+-		self.analysis=_B;self.style=_B
+-		if self.event.comment:self.write_post_comment(self.event)
+-	def prepare_version(self,version):
+-		major,minor=version
+-		if major!=1:raise EmitterError('unsupported YAML version: %d.%d'%(major,minor))
+-		return'%d.%d'%(major,minor)
+-	def prepare_tag_handle(self,handle):
+-		if not handle:raise EmitterError('tag handle must not be empty')
+-		if handle[0]!=_G or handle[-1]!=_G:raise EmitterError("tag handle must start and end with '!': %r"%utf8(handle))
+-		for ch in handle[1:-1]:
+-			if not(_O<=ch<='9'or'A'<=ch<='Z'or _P<=ch<='z'or ch in'-_'):raise EmitterError('invalid character %r in the tag handle: %r'%(utf8(ch),utf8(handle)))
+-		return handle
+-	def prepare_tag_prefix(self,prefix):
+-		if not prefix:raise EmitterError('tag prefix must not be empty')
+-		chunks=[];start=end=0
+-		if prefix[0]==_G:end=1
+-		ch_set=_T
+-		if self.dumper:
+-			version=getattr(self.dumper,_U,(1,2))
+-			if version is _B or version>=(1,2):ch_set+='#'
+-		while end<len(prefix):
+-			ch=prefix[end]
+-			if _O<=ch<='9'or'A'<=ch<='Z'or _P<=ch<='z'or ch in ch_set:end+=1
+-			else:
+-				if start<end:chunks.append(prefix[start:end])
+-				start=end=end+1;data=utf8(ch)
+-				for ch in data:chunks.append(_V%ord(ch))
+-		if start<end:chunks.append(prefix[start:end])
+-		return ''.join(chunks)
+-	def prepare_tag(self,tag):
+-		if not tag:raise EmitterError('tag must not be empty')
+-		if tag==_G:return tag
+-		handle=_B;suffix=tag;prefixes=sorted(self.tag_prefixes.keys())
+-		for prefix in prefixes:
+-			if tag.startswith(prefix)and(prefix==_G or len(prefix)<len(tag)):handle=self.tag_prefixes[prefix];suffix=tag[len(prefix):]
+-		chunks=[];start=end=0;ch_set=_T
+-		if self.dumper:
+-			version=getattr(self.dumper,_U,(1,2))
+-			if version is _B or version>=(1,2):ch_set+='#'
+-		while end<len(suffix):
+-			ch=suffix[end]
+-			if _O<=ch<='9'or'A'<=ch<='Z'or _P<=ch<='z'or ch in ch_set or ch==_G and handle!=_G:end+=1
+-			else:
+-				if start<end:chunks.append(suffix[start:end])
+-				start=end=end+1;data=utf8(ch)
+-				for ch in data:chunks.append(_V%ord(ch))
+-		if start<end:chunks.append(suffix[start:end])
+-		suffix_text=''.join(chunks)
+-		if handle:return'%s%s'%(handle,suffix_text)
+-		else:return'!<%s>'%suffix_text
+-	def prepare_anchor(self,anchor):
+-		if not anchor:raise EmitterError('anchor must not be empty')
+-		for ch in anchor:
+-			if not check_anchorname_char(ch):raise EmitterError('invalid character %r in the anchor: %r'%(utf8(ch),utf8(anchor)))
+-		return anchor
+-	def analyze_scalar(self,scalar):
+-		A='\x00 \t\r\n\x85\u2028\u2029'
+-		if not scalar:return ScalarAnalysis(scalar=scalar,empty=_A,multiline=_C,allow_flow_plain=_C,allow_block_plain=_A,allow_single_quoted=_A,allow_double_quoted=_A,allow_block=_C)
+-		block_indicators=_C;flow_indicators=_C;line_breaks=_C;special_characters=_C;leading_space=_C;leading_break=_C;trailing_space=_C;trailing_break=_C;break_space=_C;space_break=_C
+-		if scalar.startswith(_S)or scalar.startswith(_M):block_indicators=_A;flow_indicators=_A
+-		preceeded_by_whitespace=_A;followed_by_whitespace=len(scalar)==1 or scalar[1]in A;previous_space=_C;previous_break=_C;index=0
+-		while index<len(scalar):
+-			ch=scalar[index]
+-			if index==0:
+-				if ch in'#,[]{}&*!|>\'"%@`':flow_indicators=_A;block_indicators=_A
+-				if ch in'?:':
+-					if self.serializer.use_version==(1,1):flow_indicators=_A
+-					elif len(scalar)==1:flow_indicators=_A
+-					if followed_by_whitespace:block_indicators=_A
+-				if ch=='-'and followed_by_whitespace:flow_indicators=_A;block_indicators=_A
+-			else:
+-				if ch in',[]{}':flow_indicators=_A
+-				if ch==_I and self.serializer.use_version==(1,1):flow_indicators=_A
+-				if ch==':':
+-					if followed_by_whitespace:flow_indicators=_A;block_indicators=_A
+-				if ch=='#'and preceeded_by_whitespace:flow_indicators=_A;block_indicators=_A
+-			if ch in _F:line_breaks=_A
+-			if not(ch==_E or _D<=ch<='~'):
+-				if(ch==_W or _Q<=ch<=_X or _Y<=ch<='�'or self.unicode_supplementary and'𐀀'<=ch<='\U0010ffff')and ch!=_Z:
+-					if not self.allow_unicode:special_characters=_A
+-				else:special_characters=_A
+-			if ch==_D:
+-				if index==0:leading_space=_A
+-				if index==len(scalar)-1:trailing_space=_A
+-				if previous_break:break_space=_A
+-				previous_space=_A;previous_break=_C
+-			elif ch in _F:
+-				if index==0:leading_break=_A
+-				if index==len(scalar)-1:trailing_break=_A
+-				if previous_space:space_break=_A
+-				previous_space=_C;previous_break=_A
+-			else:previous_space=_C;previous_break=_C
+-			index+=1;preceeded_by_whitespace=ch in A;followed_by_whitespace=index+1>=len(scalar)or scalar[index+1]in A
+-		allow_flow_plain=_A;allow_block_plain=_A;allow_single_quoted=_A;allow_double_quoted=_A;allow_block=_A
+-		if leading_space or leading_break or trailing_space or trailing_break:allow_flow_plain=allow_block_plain=_C
+-		if trailing_space:allow_block=_C
+-		if break_space:allow_flow_plain=allow_block_plain=allow_single_quoted=_C
+-		if special_characters:allow_flow_plain=allow_block_plain=allow_single_quoted=allow_block=_C
+-		elif space_break:
+-			allow_flow_plain=allow_block_plain=allow_single_quoted=_C
+-			if not self.allow_space_break:allow_block=_C
+-		if line_breaks:allow_flow_plain=allow_block_plain=_C
+-		if flow_indicators:allow_flow_plain=_C
+-		if block_indicators:allow_block_plain=_C
+-		return ScalarAnalysis(scalar=scalar,empty=_C,multiline=line_breaks,allow_flow_plain=allow_flow_plain,allow_block_plain=allow_block_plain,allow_single_quoted=allow_single_quoted,allow_double_quoted=allow_double_quoted,allow_block=allow_block)
+-	def flush_stream(self):
+-		if hasattr(self.stream,'flush'):self.stream.flush()
+-	def write_stream_start(self):
+-		if self.encoding and self.encoding.startswith('utf-16'):self.stream.write(_Z.encode(self.encoding))
+-	def write_stream_end(self):self.flush_stream()
+-	def write_indicator(self,indicator,need_whitespace,whitespace=_C,indention=_C):
+-		if self.whitespace or not need_whitespace:data=indicator
+-		else:data=_D+indicator
+-		self.whitespace=whitespace;self.indention=self.indention and indention;self.column+=len(data);self.open_ended=_C
+-		if bool(self.encoding):data=data.encode(self.encoding)
+-		self.stream.write(data)
+-	def write_indent(self):
+-		indent=self.indent or 0
+-		if not self.indention or self.column>indent or self.column==indent and not self.whitespace:
+-			if bool(self.no_newline):self.no_newline=_C
+-			else:self.write_line_break()
+-		if self.column<indent:
+-			self.whitespace=_A;data=_D*(indent-self.column);self.column=indent
+-			if self.encoding:data=data.encode(self.encoding)
+-			self.stream.write(data)
+-	def write_line_break(self,data=_B):
+-		if data is _B:data=self.best_line_break
+-		self.whitespace=_A;self.indention=_A;self.line+=1;self.column=0
+-		if bool(self.encoding):data=data.encode(self.encoding)
+-		self.stream.write(data)
+-	def write_version_directive(self,version_text):
+-		data='%%YAML %s'%version_text
+-		if self.encoding:data=data.encode(self.encoding)
+-		self.stream.write(data);self.write_line_break()
+-	def write_tag_directive(self,handle_text,prefix_text):
+-		data='%%TAG %s %s'%(handle_text,prefix_text)
+-		if self.encoding:data=data.encode(self.encoding)
+-		self.stream.write(data);self.write_line_break()
+-	def write_single_quoted(self,text,split=_A):
+-		if self.root_context:
+-			if self.requested_indent is not _B:
+-				self.write_line_break()
+-				if self.requested_indent!=0:self.write_indent()
+-		self.write_indicator(_J,_A);spaces=_C;breaks=_C;start=end=0
+-		while end<=len(text):
+-			ch=_B
+-			if end<len(text):ch=text[end]
+-			if spaces:
+-				if ch is _B or ch!=_D:
+-					if start+1==end and self.column>self.best_width and split and start!=0 and end!=len(text):self.write_indent()
+-					else:
+-						data=text[start:end];self.column+=len(data)
+-						if bool(self.encoding):data=data.encode(self.encoding)
+-						self.stream.write(data)
+-					start=end
+-			elif breaks:
+-				if ch is _B or ch not in _F:
+-					if text[start]==_E:self.write_line_break()
+-					for br in text[start:end]:
+-						if br==_E:self.write_line_break()
+-						else:self.write_line_break(br)
+-					self.write_indent();start=end
+-			elif ch is _B or ch in _R or ch==_J:
+-				if start<end:
+-					data=text[start:end];self.column+=len(data)
+-					if bool(self.encoding):data=data.encode(self.encoding)
+-					self.stream.write(data);start=end
+-			if ch==_J:
+-				data="''";self.column+=2
+-				if bool(self.encoding):data=data.encode(self.encoding)
+-				self.stream.write(data);start=end+1
+-			if ch is not _B:spaces=ch==_D;breaks=ch in _F
+-			end+=1
+-		self.write_indicator(_J,_C)
+-	ESCAPE_REPLACEMENTS={'\x00':_O,_a:_P,'\x08':'b','\t':'t',_E:'n','\x0b':'v','\x0c':'f','\r':'r','\x1b':'e',_H:_H,_L:_L,_W:'N',_Q:'_','\u2028':'L','\u2029':'P'}
+-	def write_double_quoted(self,text,split=_A):
+-		if self.root_context:
+-			if self.requested_indent is not _B:
+-				self.write_line_break()
+-				if self.requested_indent!=0:self.write_indent()
+-		self.write_indicator(_H,_A);start=end=0
+-		while end<=len(text):
+-			ch=_B
+-			if end<len(text):ch=text[end]
+-			if ch is _B or ch in'"\\\x85\u2028\u2029\ufeff'or not(_D<=ch<='~'or self.allow_unicode and(_Q<=ch<=_X or _Y<=ch<='�')):
+-				if start<end:
+-					data=text[start:end];self.column+=len(data)
+-					if bool(self.encoding):data=data.encode(self.encoding)
+-					self.stream.write(data);start=end
+-				if ch is not _B:
+-					if ch in self.ESCAPE_REPLACEMENTS:data=_L+self.ESCAPE_REPLACEMENTS[ch]
+-					elif ch<='ÿ':data='\\x%02X'%ord(ch)
+-					elif ch<='\uffff':data='\\u%04X'%ord(ch)
+-					else:data='\\U%08X'%ord(ch)
+-					self.column+=len(data)
+-					if bool(self.encoding):data=data.encode(self.encoding)
+-					self.stream.write(data);start=end+1
+-			if 0<end<len(text)-1 and(ch==_D or start>=end)and self.column+(end-start)>self.best_width and split:
+-				data=text[start:end]+_L
+-				if start<end:start=end
+-				self.column+=len(data)
+-				if bool(self.encoding):data=data.encode(self.encoding)
+-				self.stream.write(data);self.write_indent();self.whitespace=_C;self.indention=_C
+-				if text[start]==_D:
+-					data=_L;self.column+=len(data)
+-					if bool(self.encoding):data=data.encode(self.encoding)
+-					self.stream.write(data)
+-			end+=1
+-		self.write_indicator(_H,_C)
+-	def determine_block_hints(self,text):
+-		indent=0;indicator='';hints=''
+-		if text:
+-			if text[0]in _R:indent=self.best_sequence_indent;hints+=text_type(indent)
+-			elif self.root_context:
+-				for end in ['\n---','\n...']:
+-					pos=0
+-					while _A:
+-						pos=text.find(end,pos)
+-						if pos==-1:break
+-						try:
+-							if text[pos+4]in' \r\n':break
+-						except IndexError:pass
+-						pos+=1
+-					if pos>-1:break
+-				if pos>0:indent=self.best_sequence_indent
+-			if text[-1]not in _F:indicator='-'
+-			elif len(text)==1 or text[-2]in _F:indicator='+'
+-		hints+=indicator;return hints,indent,indicator
+-	def write_folded(self,text):
+-		hints,_indent,_indicator=self.determine_block_hints(text);self.write_indicator('>'+hints,_A)
+-		if _indicator=='+':self.open_ended=_A
+-		self.write_line_break();leading_space=_A;spaces=_C;breaks=_A;start=end=0
+-		while end<=len(text):
+-			ch=_B
+-			if end<len(text):ch=text[end]
+-			if breaks:
+-				if ch is _B or ch not in'\n\x85\u2028\u2029\x07':
+-					if not leading_space and ch is not _B and ch!=_D and text[start]==_E:self.write_line_break()
+-					leading_space=ch==_D
+-					for br in text[start:end]:
+-						if br==_E:self.write_line_break()
+-						else:self.write_line_break(br)
+-					if ch is not _B:self.write_indent()
+-					start=end
+-			elif spaces:
+-				if ch!=_D:
+-					if start+1==end and self.column>self.best_width:self.write_indent()
+-					else:
+-						data=text[start:end];self.column+=len(data)
+-						if bool(self.encoding):data=data.encode(self.encoding)
+-						self.stream.write(data)
+-					start=end
+-			elif ch is _B or ch in' \n\x85\u2028\u2029\x07':
+-				data=text[start:end];self.column+=len(data)
+-				if bool(self.encoding):data=data.encode(self.encoding)
+-				self.stream.write(data)
+-				if ch==_a:
+-					if end<len(text)-1 and not text[end+2].isspace():self.write_line_break();self.write_indent();end+=2
+-					else:raise EmitterError('unexcpected fold indicator \\a before space')
+-				if ch is _B:self.write_line_break()
+-				start=end
+-			if ch is not _B:breaks=ch in _F;spaces=ch==_D
+-			end+=1
+-	def write_literal(self,text,comment=_B):
+-		hints,_indent,_indicator=self.determine_block_hints(text);self.write_indicator('|'+hints,_A)
+-		try:
+-			comment=comment[1][0]
+-			if comment:self.stream.write(comment)
+-		except (TypeError,IndexError):pass
+-		if _indicator=='+':self.open_ended=_A
+-		self.write_line_break();breaks=_A;start=end=0
+-		while end<=len(text):
+-			ch=_B
+-			if end<len(text):ch=text[end]
+-			if breaks:
+-				if ch is _B or ch not in _F:
+-					for br in text[start:end]:
+-						if br==_E:self.write_line_break()
+-						else:self.write_line_break(br)
+-					if ch is not _B:
+-						if self.root_context:idnx=self.indent if self.indent is not _B else 0;self.stream.write(_D*(_indent+idnx))
+-						else:self.write_indent()
+-					start=end
+-			elif ch is _B or ch in _F:
+-				data=text[start:end]
+-				if bool(self.encoding):data=data.encode(self.encoding)
+-				self.stream.write(data)
+-				if ch is _B:self.write_line_break()
+-				start=end
+-			if ch is not _B:breaks=ch in _F
+-			end+=1
+-	def write_plain(self,text,split=_A):
+-		if self.root_context:
+-			if self.requested_indent is not _B:
+-				self.write_line_break()
+-				if self.requested_indent!=0:self.write_indent()
+-			else:self.open_ended=_A
+-		if not text:return
+-		if not self.whitespace:
+-			data=_D;self.column+=len(data)
+-			if self.encoding:data=data.encode(self.encoding)
+-			self.stream.write(data)
+-		self.whitespace=_C;self.indention=_C;spaces=_C;breaks=_C;start=end=0
+-		while end<=len(text):
+-			ch=_B
+-			if end<len(text):ch=text[end]
+-			if spaces:
+-				if ch!=_D:
+-					if start+1==end and self.column>self.best_width and split:self.write_indent();self.whitespace=_C;self.indention=_C
+-					else:
+-						data=text[start:end];self.column+=len(data)
+-						if self.encoding:data=data.encode(self.encoding)
+-						self.stream.write(data)
+-					start=end
+-			elif breaks:
+-				if ch not in _F:
+-					if text[start]==_E:self.write_line_break()
+-					for br in text[start:end]:
+-						if br==_E:self.write_line_break()
+-						else:self.write_line_break(br)
+-					self.write_indent();self.whitespace=_C;self.indention=_C;start=end
+-			elif ch is _B or ch in _R:
+-				data=text[start:end];self.column+=len(data)
+-				if self.encoding:data=data.encode(self.encoding)
+-				try:self.stream.write(data)
+-				except:sys.stdout.write(repr(data)+_E);raise
+-				start=end
+-			if ch is not _B:spaces=ch==_D;breaks=ch in _F
+-			end+=1
+-	def write_comment(self,comment,pre=_C):
+-		value=comment.value
+-		if not pre and value[-1]==_E:value=value[:-1]
+-		try:
+-			col=comment.start_mark.column
+-			if comment.value and comment.value.startswith(_E):col=self.column
+-			elif col<self.column+1:ValueError
+-		except ValueError:col=self.column+1
+-		try:
+-			nr_spaces=col-self.column
+-			if self.column and value.strip()and nr_spaces<1 and value[0]!=_E:nr_spaces=1
+-			value=_D*nr_spaces+value
+-			try:
+-				if bool(self.encoding):value=value.encode(self.encoding)
+-			except UnicodeDecodeError:pass
+-			self.stream.write(value)
+-		except TypeError:raise
+-		if not pre:self.write_line_break()
+-	def write_pre_comment(self,event):
+-		comments=event.comment[1]
+-		if comments is _B:return _C
+-		try:
+-			start_events=MappingStartEvent,SequenceStartEvent
+-			for comment in comments:
+-				if isinstance(event,start_events)and getattr(comment,'pre_done',_B):continue
+-				if self.column!=0:self.write_line_break()
+-				self.write_comment(comment,pre=_A)
+-				if isinstance(event,start_events):comment.pre_done=_A
+-		except TypeError:sys.stdout.write('eventtt {} {}'.format(type(event),event));raise
+-		return _A
+-	def write_post_comment(self,event):
+-		if self.event.comment[0]is _B:return _C
+-		comment=event.comment[0];self.write_comment(comment);return _A
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/error.py b/dynaconf/vendor/ruamel/yaml/error.py
+deleted file mode 100644
+index 52652cc..0000000
+--- a/dynaconf/vendor/ruamel/yaml/error.py
++++ /dev/null
+@@ -1,90 +0,0 @@
+-from __future__ import absolute_import
+-_I='once'
+-_H='  in "%s", line %d, column %d'
+-_G='line'
+-_F='index'
+-_E='name'
+-_D='column'
+-_C=False
+-_B='\n'
+-_A=None
+-import warnings,textwrap
+-from .compat import utf8
+-if _C:from typing import Any,Dict,Optional,List,Text
+-__all__=['FileMark','StringMark','CommentMark','YAMLError','MarkedYAMLError','ReusedAnchorWarning','UnsafeLoaderWarning','MarkedYAMLWarning','MarkedYAMLFutureWarning']
+-class StreamMark:
+-	__slots__=_E,_F,_G,_D
+-	def __init__(A,name,index,line,column):A.name=name;A.index=index;A.line=line;A.column=column
+-	def __str__(A):B=_H%(A.name,A.line+1,A.column+1);return B
+-	def __eq__(A,other):
+-		B=other
+-		if A.line!=B.line or A.column!=B.column:return _C
+-		if A.name!=B.name or A.index!=B.index:return _C
+-		return True
+-	def __ne__(A,other):return not A.__eq__(other)
+-class FileMark(StreamMark):__slots__=()
+-class StringMark(StreamMark):
+-	__slots__=_E,_F,_G,_D,'buffer','pointer'
+-	def __init__(A,name,index,line,column,buffer,pointer):StreamMark.__init__(A,name,index,line,column);A.buffer=buffer;A.pointer=pointer
+-	def get_snippet(A,indent=4,max_length=75):
+-		L=' ';K=' ... ';J='\x00\r\n\x85\u2028\u2029';F=max_length;E=indent
+-		if A.buffer is _A:return _A
+-		D='';B=A.pointer
+-		while B>0 and A.buffer[B-1]not in J:
+-			B-=1
+-			if A.pointer-B>F/2-1:D=K;B+=5;break
+-		G='';C=A.pointer
+-		while C<len(A.buffer)and A.buffer[C]not in J:
+-			C+=1
+-			if C-A.pointer>F/2-1:G=K;C-=5;break
+-		I=utf8(A.buffer[B:C]);H='^';H='^ (line: {})'.format(A.line+1);return L*E+D+I+G+_B+L*(E+A.pointer-B+len(D))+H
+-	def __str__(A):
+-		B=A.get_snippet();C=_H%(A.name,A.line+1,A.column+1)
+-		if B is not _A:C+=':\n'+B
+-		return C
+-class CommentMark:
+-	__slots__=_D,
+-	def __init__(A,column):A.column=column
+-class YAMLError(Exception):0
+-class MarkedYAMLError(YAMLError):
+-	def __init__(A,context=_A,context_mark=_A,problem=_A,problem_mark=_A,note=_A,warn=_A):A.context=context;A.context_mark=context_mark;A.problem=problem;A.problem_mark=problem_mark;A.note=note
+-	def __str__(A):
+-		B=[]
+-		if A.context is not _A:B.append(A.context)
+-		if A.context_mark is not _A and(A.problem is _A or A.problem_mark is _A or A.context_mark.name!=A.problem_mark.name or A.context_mark.line!=A.problem_mark.line or A.context_mark.column!=A.problem_mark.column):B.append(str(A.context_mark))
+-		if A.problem is not _A:B.append(A.problem)
+-		if A.problem_mark is not _A:B.append(str(A.problem_mark))
+-		if A.note is not _A and A.note:C=textwrap.dedent(A.note);B.append(C)
+-		return _B.join(B)
+-class YAMLStreamError(Exception):0
+-class YAMLWarning(Warning):0
+-class MarkedYAMLWarning(YAMLWarning):
+-	def __init__(A,context=_A,context_mark=_A,problem=_A,problem_mark=_A,note=_A,warn=_A):A.context=context;A.context_mark=context_mark;A.problem=problem;A.problem_mark=problem_mark;A.note=note;A.warn=warn
+-	def __str__(A):
+-		B=[]
+-		if A.context is not _A:B.append(A.context)
+-		if A.context_mark is not _A and(A.problem is _A or A.problem_mark is _A or A.context_mark.name!=A.problem_mark.name or A.context_mark.line!=A.problem_mark.line or A.context_mark.column!=A.problem_mark.column):B.append(str(A.context_mark))
+-		if A.problem is not _A:B.append(A.problem)
+-		if A.problem_mark is not _A:B.append(str(A.problem_mark))
+-		if A.note is not _A and A.note:C=textwrap.dedent(A.note);B.append(C)
+-		if A.warn is not _A and A.warn:D=textwrap.dedent(A.warn);B.append(D)
+-		return _B.join(B)
+-class ReusedAnchorWarning(YAMLWarning):0
+-class UnsafeLoaderWarning(YAMLWarning):text="\nThe default 'Loader' for 'load(stream)' without further arguments can be unsafe.\nUse 'load(stream, Loader=ruamel.yaml.Loader)' explicitly if that is OK.\nAlternatively include the following in your code:\n\n  import warnings\n  warnings.simplefilter('ignore', ruamel.yaml.error.UnsafeLoaderWarning)\n\nIn most other cases you should consider using 'safe_load(stream)'"
+-warnings.simplefilter(_I,UnsafeLoaderWarning)
+-class MantissaNoDotYAML1_1Warning(YAMLWarning):
+-	def __init__(A,node,flt_str):A.node=node;A.flt=flt_str
+-	def __str__(A):B=A.node.start_mark.line;C=A.node.start_mark.column;return '\nIn YAML 1.1 floating point values should have a dot (\'.\') in their mantissa.\nSee the Floating-Point Language-Independent Type for YAML™ Version 1.1 specification\n( http://yaml.org/type/float.html ). This dot is not required for JSON nor for YAML 1.2\n\nCorrect your float: "{}" on line: {}, column: {}\n\nor alternatively include the following in your code:\n\n  import warnings\n  warnings.simplefilter(\'ignore\', ruamel.yaml.error.MantissaNoDotYAML1_1Warning)\n\n'.format(A.flt,B,C)
+-warnings.simplefilter(_I,MantissaNoDotYAML1_1Warning)
+-class YAMLFutureWarning(Warning):0
+-class MarkedYAMLFutureWarning(YAMLFutureWarning):
+-	def __init__(A,context=_A,context_mark=_A,problem=_A,problem_mark=_A,note=_A,warn=_A):A.context=context;A.context_mark=context_mark;A.problem=problem;A.problem_mark=problem_mark;A.note=note;A.warn=warn
+-	def __str__(A):
+-		B=[]
+-		if A.context is not _A:B.append(A.context)
+-		if A.context_mark is not _A and(A.problem is _A or A.problem_mark is _A or A.context_mark.name!=A.problem_mark.name or A.context_mark.line!=A.problem_mark.line or A.context_mark.column!=A.problem_mark.column):B.append(str(A.context_mark))
+-		if A.problem is not _A:B.append(A.problem)
+-		if A.problem_mark is not _A:B.append(str(A.problem_mark))
+-		if A.note is not _A and A.note:C=textwrap.dedent(A.note);B.append(C)
+-		if A.warn is not _A and A.warn:D=textwrap.dedent(A.warn);B.append(D)
+-		return _B.join(B)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/events.py b/dynaconf/vendor/ruamel/yaml/events.py
+deleted file mode 100644
+index 8c1356e..0000000
+--- a/dynaconf/vendor/ruamel/yaml/events.py
++++ /dev/null
+@@ -1,45 +0,0 @@
+-_H='explicit'
+-_G='style'
+-_F='flow_style'
+-_E='value'
+-_D='anchor'
+-_C='implicit'
+-_B='tag'
+-_A=None
+-if False:from typing import Any,Dict,Optional,List
+-def CommentCheck():0
+-class Event:
+-	__slots__='start_mark','end_mark','comment'
+-	def __init__(A,start_mark=_A,end_mark=_A,comment=CommentCheck):
+-		B=comment;A.start_mark=start_mark;A.end_mark=end_mark
+-		if B is CommentCheck:B=_A
+-		A.comment=B
+-	def __repr__(A):
+-		C=[B for B in[_D,_B,_C,_E,_F,_G]if hasattr(A,B)];B=', '.join(['%s=%r'%(B,getattr(A,B))for B in C])
+-		if A.comment not in[_A,CommentCheck]:B+=', comment={!r}'.format(A.comment)
+-		return'%s(%s)'%(A.__class__.__name__,B)
+-class NodeEvent(Event):
+-	__slots__=_D,
+-	def __init__(A,anchor,start_mark=_A,end_mark=_A,comment=_A):Event.__init__(A,start_mark,end_mark,comment);A.anchor=anchor
+-class CollectionStartEvent(NodeEvent):
+-	__slots__=_B,_C,_F,'nr_items'
+-	def __init__(A,anchor,tag,implicit,start_mark=_A,end_mark=_A,flow_style=_A,comment=_A,nr_items=_A):NodeEvent.__init__(A,anchor,start_mark,end_mark,comment);A.tag=tag;A.implicit=implicit;A.flow_style=flow_style;A.nr_items=nr_items
+-class CollectionEndEvent(Event):__slots__=()
+-class StreamStartEvent(Event):
+-	__slots__='encoding',
+-	def __init__(A,start_mark=_A,end_mark=_A,encoding=_A,comment=_A):Event.__init__(A,start_mark,end_mark,comment);A.encoding=encoding
+-class StreamEndEvent(Event):__slots__=()
+-class DocumentStartEvent(Event):
+-	__slots__=_H,'version','tags'
+-	def __init__(A,start_mark=_A,end_mark=_A,explicit=_A,version=_A,tags=_A,comment=_A):Event.__init__(A,start_mark,end_mark,comment);A.explicit=explicit;A.version=version;A.tags=tags
+-class DocumentEndEvent(Event):
+-	__slots__=_H,
+-	def __init__(A,start_mark=_A,end_mark=_A,explicit=_A,comment=_A):Event.__init__(A,start_mark,end_mark,comment);A.explicit=explicit
+-class AliasEvent(NodeEvent):__slots__=()
+-class ScalarEvent(NodeEvent):
+-	__slots__=_B,_C,_E,_G
+-	def __init__(A,anchor,tag,implicit,value,start_mark=_A,end_mark=_A,style=_A,comment=_A):NodeEvent.__init__(A,anchor,start_mark,end_mark,comment);A.tag=tag;A.implicit=implicit;A.value=value;A.style=style
+-class SequenceStartEvent(CollectionStartEvent):__slots__=()
+-class SequenceEndEvent(CollectionEndEvent):__slots__=()
+-class MappingStartEvent(CollectionStartEvent):__slots__=()
+-class MappingEndEvent(CollectionEndEvent):__slots__=()
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/loader.py b/dynaconf/vendor/ruamel/yaml/loader.py
+deleted file mode 100644
+index 4c74755..0000000
+--- a/dynaconf/vendor/ruamel/yaml/loader.py
++++ /dev/null
+@@ -1,18 +0,0 @@
+-from __future__ import absolute_import
+-_A=None
+-from .reader import Reader
+-from .scanner import Scanner,RoundTripScanner
+-from .parser import Parser,RoundTripParser
+-from .composer import Composer
+-from .constructor import BaseConstructor,SafeConstructor,Constructor,RoundTripConstructor
+-from .resolver import VersionedResolver
+-if False:from typing import Any,Dict,List,Union,Optional;from .compat import StreamTextType,VersionType
+-__all__=['BaseLoader','SafeLoader','Loader','RoundTripLoader']
+-class BaseLoader(Reader,Scanner,Parser,Composer,BaseConstructor,VersionedResolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):Reader.__init__(A,stream,loader=A);Scanner.__init__(A,loader=A);Parser.__init__(A,loader=A);Composer.__init__(A,loader=A);BaseConstructor.__init__(A,loader=A);VersionedResolver.__init__(A,version,loader=A)
+-class SafeLoader(Reader,Scanner,Parser,Composer,SafeConstructor,VersionedResolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):Reader.__init__(A,stream,loader=A);Scanner.__init__(A,loader=A);Parser.__init__(A,loader=A);Composer.__init__(A,loader=A);SafeConstructor.__init__(A,loader=A);VersionedResolver.__init__(A,version,loader=A)
+-class Loader(Reader,Scanner,Parser,Composer,Constructor,VersionedResolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):Reader.__init__(A,stream,loader=A);Scanner.__init__(A,loader=A);Parser.__init__(A,loader=A);Composer.__init__(A,loader=A);Constructor.__init__(A,loader=A);VersionedResolver.__init__(A,version,loader=A)
+-class RoundTripLoader(Reader,RoundTripScanner,RoundTripParser,Composer,RoundTripConstructor,VersionedResolver):
+-	def __init__(A,stream,version=_A,preserve_quotes=_A):Reader.__init__(A,stream,loader=A);RoundTripScanner.__init__(A,loader=A);RoundTripParser.__init__(A,loader=A);Composer.__init__(A,loader=A);RoundTripConstructor.__init__(A,preserve_quotes=preserve_quotes,loader=A);VersionedResolver.__init__(A,version,loader=A)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/main.py b/dynaconf/vendor/ruamel/yaml/main.py
+deleted file mode 100644
+index acd2e93..0000000
+--- a/dynaconf/vendor/ruamel/yaml/main.py
++++ /dev/null
+@@ -1,462 +0,0 @@
+-from __future__ import absolute_import,unicode_literals,print_function
+-_Q='_emitter'
+-_P='_serializer'
+-_O='write'
+-_N='{}.dump(_all) takes two positional argument but at least three were given ({!r})'
+-_M='read'
+-_L='_stream'
+-_K='typ'
+-_J='utf-8'
+-_I='base'
+-_H='{}.__init__() takes no positional argument but at least one was given ({!r})'
+-_G='yaml_tag'
+-_F='open'
+-_E='rt'
+-_D='_'
+-_C=True
+-_B=False
+-_A=None
+-import sys,os,warnings,glob
+-from importlib import import_module
+-import dynaconf.vendor.ruamel as ruamel
+-from .error import UnsafeLoaderWarning,YAMLError
+-from .tokens import *
+-from .events import *
+-from .nodes import *
+-from .loader import BaseLoader,SafeLoader,Loader,RoundTripLoader
+-from .dumper import BaseDumper,SafeDumper,Dumper,RoundTripDumper
+-from .compat import StringIO,BytesIO,with_metaclass,PY3,nprint
+-from .resolver import VersionedResolver,Resolver
+-from .representer import BaseRepresenter,SafeRepresenter,Representer,RoundTripRepresenter
+-from .constructor import BaseConstructor,SafeConstructor,Constructor,RoundTripConstructor
+-from .loader import Loader as UnsafeLoader
+-if _B:
+-	from typing import List,Set,Dict,Union,Any,Callable,Optional,Text;from .compat import StreamType,StreamTextType,VersionType
+-	if PY3:from pathlib import Path
+-	else:Path=Any
+-try:from _ruamel_yaml import CParser,CEmitter
+-except:CParser=CEmitter=_A
+-enforce=object()
+-class YAML:
+-	def __init__(self,_kw=enforce,typ=_A,pure=_B,output=_A,plug_ins=_A):
+-		if _kw is not enforce:raise TypeError(_H.format(self.__class__.__name__,_kw))
+-		self.typ=[_E]if typ is _A else typ if isinstance(typ,list)else[typ];self.pure=pure;self._output=output;self._context_manager=_A;self.plug_ins=[]
+-		for pu in ([]if plug_ins is _A else plug_ins)+self.official_plug_ins():file_name=pu.replace(os.sep,'.');self.plug_ins.append(import_module(file_name))
+-		self.Resolver=ruamel.yaml.resolver.VersionedResolver;self.allow_unicode=_C;self.Reader=_A;self.Representer=_A;self.Constructor=_A;self.Scanner=_A;self.Serializer=_A;self.default_flow_style=_A;typ_found=1;setup_rt=_B
+-		if _E in self.typ:setup_rt=_C
+-		elif'safe'in self.typ:self.Emitter=ruamel.yaml.emitter.Emitter if pure or CEmitter is _A else CEmitter;self.Representer=ruamel.yaml.representer.SafeRepresenter;self.Parser=ruamel.yaml.parser.Parser if pure or CParser is _A else CParser;self.Composer=ruamel.yaml.composer.Composer;self.Constructor=ruamel.yaml.constructor.SafeConstructor
+-		elif _I in self.typ:self.Emitter=ruamel.yaml.emitter.Emitter;self.Representer=ruamel.yaml.representer.BaseRepresenter;self.Parser=ruamel.yaml.parser.Parser if pure or CParser is _A else CParser;self.Composer=ruamel.yaml.composer.Composer;self.Constructor=ruamel.yaml.constructor.BaseConstructor
+-		elif'unsafe'in self.typ:self.Emitter=ruamel.yaml.emitter.Emitter if pure or CEmitter is _A else CEmitter;self.Representer=ruamel.yaml.representer.Representer;self.Parser=ruamel.yaml.parser.Parser if pure or CParser is _A else CParser;self.Composer=ruamel.yaml.composer.Composer;self.Constructor=ruamel.yaml.constructor.Constructor
+-		else:setup_rt=_C;typ_found=0
+-		if setup_rt:self.default_flow_style=_B;self.Emitter=ruamel.yaml.emitter.Emitter;self.Serializer=ruamel.yaml.serializer.Serializer;self.Representer=ruamel.yaml.representer.RoundTripRepresenter;self.Scanner=ruamel.yaml.scanner.RoundTripScanner;self.Parser=ruamel.yaml.parser.RoundTripParser;self.Composer=ruamel.yaml.composer.Composer;self.Constructor=ruamel.yaml.constructor.RoundTripConstructor
+-		del setup_rt;self.stream=_A;self.canonical=_A;self.old_indent=_A;self.width=_A;self.line_break=_A;self.map_indent=_A;self.sequence_indent=_A;self.sequence_dash_offset=0;self.compact_seq_seq=_A;self.compact_seq_map=_A;self.sort_base_mapping_type_on_output=_A;self.top_level_colon_align=_A;self.prefix_colon=_A;self.version=_A;self.preserve_quotes=_A;self.allow_duplicate_keys=_B;self.encoding=_J;self.explicit_start=_A;self.explicit_end=_A;self.tags=_A;self.default_style=_A;self.top_level_block_style_scalar_no_indent_error_1_1=_B;self.scalar_after_indicator=_A;self.brace_single_entry_mapping_in_flow_sequence=_B
+-		for module in self.plug_ins:
+-			if getattr(module,_K,_A)in self.typ:typ_found+=1;module.init_typ(self);break
+-		if typ_found==0:raise NotImplementedError('typ "{}"not recognised (need to install plug-in?)'.format(self.typ))
+-	@property
+-	def reader(self):
+-		try:return self._reader
+-		except AttributeError:self._reader=self.Reader(_A,loader=self);return self._reader
+-	@property
+-	def scanner(self):
+-		try:return self._scanner
+-		except AttributeError:self._scanner=self.Scanner(loader=self);return self._scanner
+-	@property
+-	def parser(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):
+-			if self.Parser is not CParser:setattr(self,attr,self.Parser(loader=self))
+-			elif getattr(self,_L,_A)is _A:return _A
+-			else:setattr(self,attr,CParser(self._stream))
+-		return getattr(self,attr)
+-	@property
+-	def composer(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):setattr(self,attr,self.Composer(loader=self))
+-		return getattr(self,attr)
+-	@property
+-	def constructor(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):cnst=self.Constructor(preserve_quotes=self.preserve_quotes,loader=self);cnst.allow_duplicate_keys=self.allow_duplicate_keys;setattr(self,attr,cnst)
+-		return getattr(self,attr)
+-	@property
+-	def resolver(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):setattr(self,attr,self.Resolver(version=self.version,loader=self))
+-		return getattr(self,attr)
+-	@property
+-	def emitter(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):
+-			if self.Emitter is not CEmitter:
+-				_emitter=self.Emitter(_A,canonical=self.canonical,indent=self.old_indent,width=self.width,allow_unicode=self.allow_unicode,line_break=self.line_break,prefix_colon=self.prefix_colon,brace_single_entry_mapping_in_flow_sequence=self.brace_single_entry_mapping_in_flow_sequence,dumper=self);setattr(self,attr,_emitter)
+-				if self.map_indent is not _A:_emitter.best_map_indent=self.map_indent
+-				if self.sequence_indent is not _A:_emitter.best_sequence_indent=self.sequence_indent
+-				if self.sequence_dash_offset is not _A:_emitter.sequence_dash_offset=self.sequence_dash_offset
+-				if self.compact_seq_seq is not _A:_emitter.compact_seq_seq=self.compact_seq_seq
+-				if self.compact_seq_map is not _A:_emitter.compact_seq_map=self.compact_seq_map
+-			else:
+-				if getattr(self,_L,_A)is _A:return _A
+-				return _A
+-		return getattr(self,attr)
+-	@property
+-	def serializer(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):setattr(self,attr,self.Serializer(encoding=self.encoding,explicit_start=self.explicit_start,explicit_end=self.explicit_end,version=self.version,tags=self.tags,dumper=self))
+-		return getattr(self,attr)
+-	@property
+-	def representer(self):
+-		attr=_D+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):
+-			repres=self.Representer(default_style=self.default_style,default_flow_style=self.default_flow_style,dumper=self)
+-			if self.sort_base_mapping_type_on_output is not _A:repres.sort_base_mapping_type_on_output=self.sort_base_mapping_type_on_output
+-			setattr(self,attr,repres)
+-		return getattr(self,attr)
+-	def load(self,stream):
+-		if not hasattr(stream,_M)and hasattr(stream,_F):
+-			with stream.open('rb')as fp:return self.load(fp)
+-		constructor,parser=self.get_constructor_parser(stream)
+-		try:return constructor.get_single_data()
+-		finally:
+-			parser.dispose()
+-			try:self._reader.reset_reader()
+-			except AttributeError:pass
+-			try:self._scanner.reset_scanner()
+-			except AttributeError:pass
+-	def load_all(self,stream,_kw=enforce):
+-		if _kw is not enforce:raise TypeError(_H.format(self.__class__.__name__,_kw))
+-		if not hasattr(stream,_M)and hasattr(stream,_F):
+-			with stream.open('r')as fp:
+-				for d in self.load_all(fp,_kw=enforce):yield d
+-				return
+-		constructor,parser=self.get_constructor_parser(stream)
+-		try:
+-			while constructor.check_data():yield constructor.get_data()
+-		finally:
+-			parser.dispose()
+-			try:self._reader.reset_reader()
+-			except AttributeError:pass
+-			try:self._scanner.reset_scanner()
+-			except AttributeError:pass
+-	def get_constructor_parser(self,stream):
+-		if self.Parser is not CParser:
+-			if self.Reader is _A:self.Reader=ruamel.yaml.reader.Reader
+-			if self.Scanner is _A:self.Scanner=ruamel.yaml.scanner.Scanner
+-			self.reader.stream=stream
+-		elif self.Reader is not _A:
+-			if self.Scanner is _A:self.Scanner=ruamel.yaml.scanner.Scanner
+-			self.Parser=ruamel.yaml.parser.Parser;self.reader.stream=stream
+-		elif self.Scanner is not _A:
+-			if self.Reader is _A:self.Reader=ruamel.yaml.reader.Reader
+-			self.Parser=ruamel.yaml.parser.Parser;self.reader.stream=stream
+-		else:
+-			rslvr=self.Resolver
+-			class XLoader(self.Parser,self.Constructor,rslvr):
+-				def __init__(selfx,stream,version=self.version,preserve_quotes=_A):CParser.__init__(selfx,stream);selfx._parser=selfx._composer=selfx;self.Constructor.__init__(selfx,loader=selfx);selfx.allow_duplicate_keys=self.allow_duplicate_keys;rslvr.__init__(selfx,version=version,loadumper=selfx)
+-			self._stream=stream;loader=XLoader(stream);return loader,loader
+-		return self.constructor,self.parser
+-	def dump(self,data,stream=_A,_kw=enforce,transform=_A):
+-		if self._context_manager:
+-			if not self._output:raise TypeError('Missing output stream while dumping from context manager')
+-			if _kw is not enforce:raise TypeError('{}.dump() takes one positional argument but at least two were given ({!r})'.format(self.__class__.__name__,_kw))
+-			if transform is not _A:raise TypeError('{}.dump() in the context manager cannot have transform keyword '.format(self.__class__.__name__))
+-			self._context_manager.dump(data)
+-		else:
+-			if stream is _A:raise TypeError('Need a stream argument when not dumping from context manager')
+-			return self.dump_all([data],stream,_kw,transform=transform)
+-	def dump_all(self,documents,stream,_kw=enforce,transform=_A):
+-		if self._context_manager:raise NotImplementedError
+-		if _kw is not enforce:raise TypeError(_N.format(self.__class__.__name__,_kw))
+-		self._output=stream;self._context_manager=YAMLContextManager(self,transform=transform)
+-		for data in documents:self._context_manager.dump(data)
+-		self._context_manager.teardown_output();self._output=_A;self._context_manager=_A
+-	def Xdump_all(self,documents,stream,_kw=enforce,transform=_A):
+-		if not hasattr(stream,_O)and hasattr(stream,_F):
+-			with stream.open('w')as fp:return self.dump_all(documents,fp,_kw,transform=transform)
+-		if _kw is not enforce:raise TypeError(_N.format(self.__class__.__name__,_kw))
+-		if self.top_level_colon_align is _C:tlca=max([len(str(x))for x in documents[0]])
+-		else:tlca=self.top_level_colon_align
+-		if transform is not _A:
+-			fstream=stream
+-			if self.encoding is _A:stream=StringIO()
+-			else:stream=BytesIO()
+-		serializer,representer,emitter=self.get_serializer_representer_emitter(stream,tlca)
+-		try:
+-			self.serializer.open()
+-			for data in documents:
+-				try:self.representer.represent(data)
+-				except AttributeError:raise
+-			self.serializer.close()
+-		finally:
+-			try:self.emitter.dispose()
+-			except AttributeError:raise
+-			delattr(self,_P);delattr(self,_Q)
+-		if transform:
+-			val=stream.getvalue()
+-			if self.encoding:val=val.decode(self.encoding)
+-			if fstream is _A:transform(val)
+-			else:fstream.write(transform(val))
+-		return _A
+-	def get_serializer_representer_emitter(self,stream,tlca):
+-		if self.Emitter is not CEmitter:
+-			if self.Serializer is _A:self.Serializer=ruamel.yaml.serializer.Serializer
+-			self.emitter.stream=stream;self.emitter.top_level_colon_align=tlca
+-			if self.scalar_after_indicator is not _A:self.emitter.scalar_after_indicator=self.scalar_after_indicator
+-			return self.serializer,self.representer,self.emitter
+-		if self.Serializer is not _A:
+-			self.Emitter=ruamel.yaml.emitter.Emitter;self.emitter.stream=stream;self.emitter.top_level_colon_align=tlca
+-			if self.scalar_after_indicator is not _A:self.emitter.scalar_after_indicator=self.scalar_after_indicator
+-			return self.serializer,self.representer,self.emitter
+-		rslvr=ruamel.yaml.resolver.BaseResolver if _I in self.typ else ruamel.yaml.resolver.Resolver
+-		class XDumper(CEmitter,self.Representer,rslvr):
+-			def __init__(selfx,stream,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):CEmitter.__init__(selfx,stream,canonical=canonical,indent=indent,width=width,encoding=encoding,allow_unicode=allow_unicode,line_break=line_break,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags);selfx._emitter=selfx._serializer=selfx._representer=selfx;self.Representer.__init__(selfx,default_style=default_style,default_flow_style=default_flow_style);rslvr.__init__(selfx)
+-		self._stream=stream;dumper=XDumper(stream,default_style=self.default_style,default_flow_style=self.default_flow_style,canonical=self.canonical,indent=self.old_indent,width=self.width,allow_unicode=self.allow_unicode,line_break=self.line_break,explicit_start=self.explicit_start,explicit_end=self.explicit_end,version=self.version,tags=self.tags);self._emitter=self._serializer=dumper;return dumper,dumper,dumper
+-	def map(self,**kw):
+-		if _E in self.typ:from dynaconf.vendor.ruamel.yaml.comments import CommentedMap;return CommentedMap(**kw)
+-		else:return dict(**kw)
+-	def seq(self,*args):
+-		if _E in self.typ:from dynaconf.vendor.ruamel.yaml.comments import CommentedSeq;return CommentedSeq(*args)
+-		else:return list(*args)
+-	def official_plug_ins(self):bd=os.path.dirname(__file__);gpbd=os.path.dirname(os.path.dirname(bd));res=[x.replace(gpbd,'')[1:-3]for x in glob.glob(bd+'/*/__plug_in__.py')];return res
+-	def register_class(self,cls):
+-		tag=getattr(cls,_G,'!'+cls.__name__)
+-		try:self.representer.add_representer(cls,cls.to_yaml)
+-		except AttributeError:
+-			def t_y(representer,data):return representer.represent_yaml_object(tag,data,cls,flow_style=representer.default_flow_style)
+-			self.representer.add_representer(cls,t_y)
+-		try:self.constructor.add_constructor(tag,cls.from_yaml)
+-		except AttributeError:
+-			def f_y(constructor,node):return constructor.construct_yaml_object(node,cls)
+-			self.constructor.add_constructor(tag,f_y)
+-		return cls
+-	def parse(self,stream):
+-		_,parser=self.get_constructor_parser(stream)
+-		try:
+-			while parser.check_event():yield parser.get_event()
+-		finally:
+-			parser.dispose()
+-			try:self._reader.reset_reader()
+-			except AttributeError:pass
+-			try:self._scanner.reset_scanner()
+-			except AttributeError:pass
+-	def __enter__(self):self._context_manager=YAMLContextManager(self);return self
+-	def __exit__(self,typ,value,traceback):
+-		if typ:nprint(_K,typ)
+-		self._context_manager.teardown_output();self._context_manager=_A
+-	def _indent(self,mapping=_A,sequence=_A,offset=_A):
+-		if mapping is not _A:self.map_indent=mapping
+-		if sequence is not _A:self.sequence_indent=sequence
+-		if offset is not _A:self.sequence_dash_offset=offset
+-	@property
+-	def indent(self):return self._indent
+-	@indent.setter
+-	def indent(self,val):self.old_indent=val
+-	@property
+-	def block_seq_indent(self):return self.sequence_dash_offset
+-	@block_seq_indent.setter
+-	def block_seq_indent(self,val):self.sequence_dash_offset=val
+-	def compact(self,seq_seq=_A,seq_map=_A):self.compact_seq_seq=seq_seq;self.compact_seq_map=seq_map
+-class YAMLContextManager:
+-	def __init__(self,yaml,transform=_A):
+-		self._yaml=yaml;self._output_inited=_B;self._output_path=_A;self._output=self._yaml._output;self._transform=transform
+-		if not hasattr(self._output,_O)and hasattr(self._output,_F):self._output_path=self._output;self._output=self._output_path.open('w')
+-		if self._transform is not _A:
+-			self._fstream=self._output
+-			if self._yaml.encoding is _A:self._output=StringIO()
+-			else:self._output=BytesIO()
+-	def teardown_output(self):
+-		if self._output_inited:self._yaml.serializer.close()
+-		else:return
+-		try:self._yaml.emitter.dispose()
+-		except AttributeError:raise
+-		try:delattr(self._yaml,_P);delattr(self._yaml,_Q)
+-		except AttributeError:raise
+-		if self._transform:
+-			val=self._output.getvalue()
+-			if self._yaml.encoding:val=val.decode(self._yaml.encoding)
+-			if self._fstream is _A:self._transform(val)
+-			else:self._fstream.write(self._transform(val));self._fstream.flush();self._output=self._fstream
+-		if self._output_path is not _A:self._output.close()
+-	def init_output(self,first_data):
+-		if self._yaml.top_level_colon_align is _C:tlca=max([len(str(x))for x in first_data])
+-		else:tlca=self._yaml.top_level_colon_align
+-		self._yaml.get_serializer_representer_emitter(self._output,tlca);self._yaml.serializer.open();self._output_inited=_C
+-	def dump(self,data):
+-		if not self._output_inited:self.init_output(data)
+-		try:self._yaml.representer.represent(data)
+-		except AttributeError:raise
+-def yaml_object(yml):
+-	def yo_deco(cls):
+-		tag=getattr(cls,_G,'!'+cls.__name__)
+-		try:yml.representer.add_representer(cls,cls.to_yaml)
+-		except AttributeError:
+-			def t_y(representer,data):return representer.represent_yaml_object(tag,data,cls,flow_style=representer.default_flow_style)
+-			yml.representer.add_representer(cls,t_y)
+-		try:yml.constructor.add_constructor(tag,cls.from_yaml)
+-		except AttributeError:
+-			def f_y(constructor,node):return constructor.construct_yaml_object(node,cls)
+-			yml.constructor.add_constructor(tag,f_y)
+-		return cls
+-	return yo_deco
+-def scan(stream,Loader=Loader):
+-	loader=Loader(stream)
+-	try:
+-		while loader.scanner.check_token():yield loader.scanner.get_token()
+-	finally:loader._parser.dispose()
+-def parse(stream,Loader=Loader):
+-	loader=Loader(stream)
+-	try:
+-		while loader._parser.check_event():yield loader._parser.get_event()
+-	finally:loader._parser.dispose()
+-def compose(stream,Loader=Loader):
+-	loader=Loader(stream)
+-	try:return loader.get_single_node()
+-	finally:loader.dispose()
+-def compose_all(stream,Loader=Loader):
+-	loader=Loader(stream)
+-	try:
+-		while loader.check_node():yield loader._composer.get_node()
+-	finally:loader._parser.dispose()
+-def load(stream,Loader=_A,version=_A,preserve_quotes=_A):
+-	if Loader is _A:warnings.warn(UnsafeLoaderWarning.text,UnsafeLoaderWarning,stacklevel=2);Loader=UnsafeLoader
+-	loader=Loader(stream,version,preserve_quotes=preserve_quotes)
+-	try:return loader._constructor.get_single_data()
+-	finally:
+-		loader._parser.dispose()
+-		try:loader._reader.reset_reader()
+-		except AttributeError:pass
+-		try:loader._scanner.reset_scanner()
+-		except AttributeError:pass
+-def load_all(stream,Loader=_A,version=_A,preserve_quotes=_A):
+-	if Loader is _A:warnings.warn(UnsafeLoaderWarning.text,UnsafeLoaderWarning,stacklevel=2);Loader=UnsafeLoader
+-	loader=Loader(stream,version,preserve_quotes=preserve_quotes)
+-	try:
+-		while loader._constructor.check_data():yield loader._constructor.get_data()
+-	finally:
+-		loader._parser.dispose()
+-		try:loader._reader.reset_reader()
+-		except AttributeError:pass
+-		try:loader._scanner.reset_scanner()
+-		except AttributeError:pass
+-def safe_load(stream,version=_A):return load(stream,SafeLoader,version)
+-def safe_load_all(stream,version=_A):return load_all(stream,SafeLoader,version)
+-def round_trip_load(stream,version=_A,preserve_quotes=_A):return load(stream,RoundTripLoader,version,preserve_quotes=preserve_quotes)
+-def round_trip_load_all(stream,version=_A,preserve_quotes=_A):return load_all(stream,RoundTripLoader,version,preserve_quotes=preserve_quotes)
+-def emit(events,stream=_A,Dumper=Dumper,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A):
+-	getvalue=_A
+-	if stream is _A:stream=StringIO();getvalue=stream.getvalue
+-	dumper=Dumper(stream,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break)
+-	try:
+-		for event in events:dumper.emit(event)
+-	finally:
+-		try:dumper._emitter.dispose()
+-		except AttributeError:raise;dumper.dispose()
+-	if getvalue is not _A:return getvalue()
+-enc=_A if PY3 else _J
+-def serialize_all(nodes,stream=_A,Dumper=Dumper,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=enc,explicit_start=_A,explicit_end=_A,version=_A,tags=_A):
+-	getvalue=_A
+-	if stream is _A:
+-		if encoding is _A:stream=StringIO()
+-		else:stream=BytesIO()
+-		getvalue=stream.getvalue
+-	dumper=Dumper(stream,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,encoding=encoding,version=version,tags=tags,explicit_start=explicit_start,explicit_end=explicit_end)
+-	try:
+-		dumper._serializer.open()
+-		for node in nodes:dumper.serialize(node)
+-		dumper._serializer.close()
+-	finally:
+-		try:dumper._emitter.dispose()
+-		except AttributeError:raise;dumper.dispose()
+-	if getvalue is not _A:return getvalue()
+-def serialize(node,stream=_A,Dumper=Dumper,**kwds):return serialize_all([node],stream,Dumper=Dumper,**kwds)
+-def dump_all(documents,stream=_A,Dumper=Dumper,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=enc,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):
+-	getvalue=_A
+-	if top_level_colon_align is _C:top_level_colon_align=max([len(str(x))for x in documents[0]])
+-	if stream is _A:
+-		if encoding is _A:stream=StringIO()
+-		else:stream=BytesIO()
+-		getvalue=stream.getvalue
+-	dumper=Dumper(stream,default_style=default_style,default_flow_style=default_flow_style,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,block_seq_indent=block_seq_indent,top_level_colon_align=top_level_colon_align,prefix_colon=prefix_colon)
+-	try:
+-		dumper._serializer.open()
+-		for data in documents:
+-			try:dumper._representer.represent(data)
+-			except AttributeError:raise
+-		dumper._serializer.close()
+-	finally:
+-		try:dumper._emitter.dispose()
+-		except AttributeError:raise;dumper.dispose()
+-	if getvalue is not _A:return getvalue()
+-	return _A
+-def dump(data,stream=_A,Dumper=Dumper,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=enc,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A):return dump_all([data],stream,Dumper=Dumper,default_style=default_style,default_flow_style=default_flow_style,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,block_seq_indent=block_seq_indent)
+-def safe_dump_all(documents,stream=_A,**kwds):return dump_all(documents,stream,Dumper=SafeDumper,**kwds)
+-def safe_dump(data,stream=_A,**kwds):return dump_all([data],stream,Dumper=SafeDumper,**kwds)
+-def round_trip_dump(data,stream=_A,Dumper=RoundTripDumper,default_style=_A,default_flow_style=_A,canonical=_A,indent=_A,width=_A,allow_unicode=_A,line_break=_A,encoding=enc,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,block_seq_indent=_A,top_level_colon_align=_A,prefix_colon=_A):allow_unicode=_C if allow_unicode is _A else allow_unicode;return dump_all([data],stream,Dumper=Dumper,default_style=default_style,default_flow_style=default_flow_style,canonical=canonical,indent=indent,width=width,allow_unicode=allow_unicode,line_break=line_break,encoding=encoding,explicit_start=explicit_start,explicit_end=explicit_end,version=version,tags=tags,block_seq_indent=block_seq_indent,top_level_colon_align=top_level_colon_align,prefix_colon=prefix_colon)
+-def add_implicit_resolver(tag,regexp,first=_A,Loader=_A,Dumper=_A,resolver=Resolver):
+-	A='add_implicit_resolver'
+-	if Loader is _A and Dumper is _A:resolver.add_implicit_resolver(tag,regexp,first);return
+-	if Loader:
+-		if hasattr(Loader,A):Loader.add_implicit_resolver(tag,regexp,first)
+-		elif issubclass(Loader,(BaseLoader,SafeLoader,ruamel.yaml.loader.Loader,RoundTripLoader)):Resolver.add_implicit_resolver(tag,regexp,first)
+-		else:raise NotImplementedError
+-	if Dumper:
+-		if hasattr(Dumper,A):Dumper.add_implicit_resolver(tag,regexp,first)
+-		elif issubclass(Dumper,(BaseDumper,SafeDumper,ruamel.yaml.dumper.Dumper,RoundTripDumper)):Resolver.add_implicit_resolver(tag,regexp,first)
+-		else:raise NotImplementedError
+-def add_path_resolver(tag,path,kind=_A,Loader=_A,Dumper=_A,resolver=Resolver):
+-	A='add_path_resolver'
+-	if Loader is _A and Dumper is _A:resolver.add_path_resolver(tag,path,kind);return
+-	if Loader:
+-		if hasattr(Loader,A):Loader.add_path_resolver(tag,path,kind)
+-		elif issubclass(Loader,(BaseLoader,SafeLoader,ruamel.yaml.loader.Loader,RoundTripLoader)):Resolver.add_path_resolver(tag,path,kind)
+-		else:raise NotImplementedError
+-	if Dumper:
+-		if hasattr(Dumper,A):Dumper.add_path_resolver(tag,path,kind)
+-		elif issubclass(Dumper,(BaseDumper,SafeDumper,ruamel.yaml.dumper.Dumper,RoundTripDumper)):Resolver.add_path_resolver(tag,path,kind)
+-		else:raise NotImplementedError
+-def add_constructor(tag,object_constructor,Loader=_A,constructor=Constructor):
+-	if Loader is _A:constructor.add_constructor(tag,object_constructor)
+-	else:
+-		if hasattr(Loader,'add_constructor'):Loader.add_constructor(tag,object_constructor);return
+-		if issubclass(Loader,BaseLoader):BaseConstructor.add_constructor(tag,object_constructor)
+-		elif issubclass(Loader,SafeLoader):SafeConstructor.add_constructor(tag,object_constructor)
+-		elif issubclass(Loader,Loader):Constructor.add_constructor(tag,object_constructor)
+-		elif issubclass(Loader,RoundTripLoader):RoundTripConstructor.add_constructor(tag,object_constructor)
+-		else:raise NotImplementedError
+-def add_multi_constructor(tag_prefix,multi_constructor,Loader=_A,constructor=Constructor):
+-	if Loader is _A:constructor.add_multi_constructor(tag_prefix,multi_constructor)
+-	else:
+-		if _B and hasattr(Loader,'add_multi_constructor'):Loader.add_multi_constructor(tag_prefix,constructor);return
+-		if issubclass(Loader,BaseLoader):BaseConstructor.add_multi_constructor(tag_prefix,multi_constructor)
+-		elif issubclass(Loader,SafeLoader):SafeConstructor.add_multi_constructor(tag_prefix,multi_constructor)
+-		elif issubclass(Loader,ruamel.yaml.loader.Loader):Constructor.add_multi_constructor(tag_prefix,multi_constructor)
+-		elif issubclass(Loader,RoundTripLoader):RoundTripConstructor.add_multi_constructor(tag_prefix,multi_constructor)
+-		else:raise NotImplementedError
+-def add_representer(data_type,object_representer,Dumper=_A,representer=Representer):
+-	if Dumper is _A:representer.add_representer(data_type,object_representer)
+-	else:
+-		if hasattr(Dumper,'add_representer'):Dumper.add_representer(data_type,object_representer);return
+-		if issubclass(Dumper,BaseDumper):BaseRepresenter.add_representer(data_type,object_representer)
+-		elif issubclass(Dumper,SafeDumper):SafeRepresenter.add_representer(data_type,object_representer)
+-		elif issubclass(Dumper,Dumper):Representer.add_representer(data_type,object_representer)
+-		elif issubclass(Dumper,RoundTripDumper):RoundTripRepresenter.add_representer(data_type,object_representer)
+-		else:raise NotImplementedError
+-def add_multi_representer(data_type,multi_representer,Dumper=_A,representer=Representer):
+-	if Dumper is _A:representer.add_multi_representer(data_type,multi_representer)
+-	else:
+-		if hasattr(Dumper,'add_multi_representer'):Dumper.add_multi_representer(data_type,multi_representer);return
+-		if issubclass(Dumper,BaseDumper):BaseRepresenter.add_multi_representer(data_type,multi_representer)
+-		elif issubclass(Dumper,SafeDumper):SafeRepresenter.add_multi_representer(data_type,multi_representer)
+-		elif issubclass(Dumper,Dumper):Representer.add_multi_representer(data_type,multi_representer)
+-		elif issubclass(Dumper,RoundTripDumper):RoundTripRepresenter.add_multi_representer(data_type,multi_representer)
+-		else:raise NotImplementedError
+-class YAMLObjectMetaclass(type):
+-	def __init__(cls,name,bases,kwds):
+-		super(YAMLObjectMetaclass,cls).__init__(name,bases,kwds)
+-		if _G in kwds and kwds[_G]is not _A:cls.yaml_constructor.add_constructor(cls.yaml_tag,cls.from_yaml);cls.yaml_representer.add_representer(cls,cls.to_yaml)
+-class YAMLObject(with_metaclass(YAMLObjectMetaclass)):
+-	__slots__=();yaml_constructor=Constructor;yaml_representer=Representer;yaml_tag=_A;yaml_flow_style=_A
+-	@classmethod
+-	def from_yaml(cls,constructor,node):return constructor.construct_yaml_object(node,cls)
+-	@classmethod
+-	def to_yaml(cls,representer,data):return representer.represent_yaml_object(cls.yaml_tag,data,cls,flow_style=cls.yaml_flow_style)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/nodes.py b/dynaconf/vendor/ruamel/yaml/nodes.py
+deleted file mode 100644
+index ffbd8cb..0000000
+--- a/dynaconf/vendor/ruamel/yaml/nodes.py
++++ /dev/null
+@@ -1,32 +0,0 @@
+-from __future__ import print_function
+-_A=None
+-import sys
+-from .compat import string_types
+-if False:from typing import Dict,Any,Text
+-class Node:
+-	__slots__='tag','value','start_mark','end_mark','comment','anchor'
+-	def __init__(A,tag,value,start_mark,end_mark,comment=_A,anchor=_A):A.tag=tag;A.value=value;A.start_mark=start_mark;A.end_mark=end_mark;A.comment=comment;A.anchor=anchor
+-	def __repr__(A):B=A.value;B=repr(B);return'%s(tag=%r, value=%s)'%(A.__class__.__name__,A.tag,B)
+-	def dump(A,indent=0):
+-		F='    {}comment: {})\n';D='  ';B=indent
+-		if isinstance(A.value,string_types):
+-			sys.stdout.write('{}{}(tag={!r}, value={!r})\n'.format(D*B,A.__class__.__name__,A.tag,A.value))
+-			if A.comment:sys.stdout.write(F.format(D*B,A.comment))
+-			return
+-		sys.stdout.write('{}{}(tag={!r})\n'.format(D*B,A.__class__.__name__,A.tag))
+-		if A.comment:sys.stdout.write(F.format(D*B,A.comment))
+-		for C in A.value:
+-			if isinstance(C,tuple):
+-				for E in C:E.dump(B+1)
+-			elif isinstance(C,Node):C.dump(B+1)
+-			else:sys.stdout.write('Node value type? {}\n'.format(type(C)))
+-class ScalarNode(Node):
+-	__slots__='style',;id='scalar'
+-	def __init__(A,tag,value,start_mark=_A,end_mark=_A,style=_A,comment=_A,anchor=_A):Node.__init__(A,tag,value,start_mark,end_mark,comment=comment,anchor=anchor);A.style=style
+-class CollectionNode(Node):
+-	__slots__='flow_style',
+-	def __init__(A,tag,value,start_mark=_A,end_mark=_A,flow_style=_A,comment=_A,anchor=_A):Node.__init__(A,tag,value,start_mark,end_mark,comment=comment);A.flow_style=flow_style;A.anchor=anchor
+-class SequenceNode(CollectionNode):__slots__=();id='sequence'
+-class MappingNode(CollectionNode):
+-	__slots__='merge',;id='mapping'
+-	def __init__(A,tag,value,start_mark=_A,end_mark=_A,flow_style=_A,comment=_A,anchor=_A):CollectionNode.__init__(A,tag,value,start_mark,end_mark,flow_style,comment,anchor);A.merge=_A
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/parser.py b/dynaconf/vendor/ruamel/yaml/parser.py
+deleted file mode 100644
+index 2fc791c..0000000
+--- a/dynaconf/vendor/ruamel/yaml/parser.py
++++ /dev/null
+@@ -1,216 +0,0 @@
+-from __future__ import absolute_import
+-_F='expected <block end>, but found %r'
+-_E='typ'
+-_D='!'
+-_C=True
+-_B=False
+-_A=None
+-from .error import MarkedYAMLError
+-from .tokens import *
+-from .events import *
+-from .scanner import Scanner,RoundTripScanner,ScannerError
+-from .compat import utf8,nprint,nprintf
+-if _B:from typing import Any,Dict,Optional,List
+-__all__=['Parser','RoundTripParser','ParserError']
+-class ParserError(MarkedYAMLError):0
+-class Parser:
+-	DEFAULT_TAGS={_D:_D,'!!':'tag:yaml.org,2002:'}
+-	def __init__(self,loader):
+-		self.loader=loader
+-		if self.loader is not _A and getattr(self.loader,'_parser',_A)is _A:self.loader._parser=self
+-		self.reset_parser()
+-	def reset_parser(self):self.current_event=_A;self.tag_handles={};self.states=[];self.marks=[];self.state=self.parse_stream_start
+-	def dispose(self):self.reset_parser()
+-	@property
+-	def scanner(self):
+-		if hasattr(self.loader,_E):return self.loader.scanner
+-		return self.loader._scanner
+-	@property
+-	def resolver(self):
+-		if hasattr(self.loader,_E):return self.loader.resolver
+-		return self.loader._resolver
+-	def check_event(self,*choices):
+-		if self.current_event is _A:
+-			if self.state:self.current_event=self.state()
+-		if self.current_event is not _A:
+-			if not choices:return _C
+-			for choice in choices:
+-				if isinstance(self.current_event,choice):return _C
+-		return _B
+-	def peek_event(self):
+-		if self.current_event is _A:
+-			if self.state:self.current_event=self.state()
+-		return self.current_event
+-	def get_event(self):
+-		if self.current_event is _A:
+-			if self.state:self.current_event=self.state()
+-		value=self.current_event;self.current_event=_A;return value
+-	def parse_stream_start(self):token=self.scanner.get_token();token.move_comment(self.scanner.peek_token());event=StreamStartEvent(token.start_mark,token.end_mark,encoding=token.encoding);self.state=self.parse_implicit_document_start;return event
+-	def parse_implicit_document_start(self):
+-		if not self.scanner.check_token(DirectiveToken,DocumentStartToken,StreamEndToken):self.tag_handles=self.DEFAULT_TAGS;token=self.scanner.peek_token();start_mark=end_mark=token.start_mark;event=DocumentStartEvent(start_mark,end_mark,explicit=_B);self.states.append(self.parse_document_end);self.state=self.parse_block_node;return event
+-		else:return self.parse_document_start()
+-	def parse_document_start(self):
+-		while self.scanner.check_token(DocumentEndToken):self.scanner.get_token()
+-		if not self.scanner.check_token(StreamEndToken):
+-			token=self.scanner.peek_token();start_mark=token.start_mark;version,tags=self.process_directives()
+-			if not self.scanner.check_token(DocumentStartToken):raise ParserError(_A,_A,"expected '<document start>', but found %r"%self.scanner.peek_token().id,self.scanner.peek_token().start_mark)
+-			token=self.scanner.get_token();end_mark=token.end_mark;event=DocumentStartEvent(start_mark,end_mark,explicit=_C,version=version,tags=tags);self.states.append(self.parse_document_end);self.state=self.parse_document_content
+-		else:token=self.scanner.get_token();event=StreamEndEvent(token.start_mark,token.end_mark,comment=token.comment);assert not self.states;assert not self.marks;self.state=_A
+-		return event
+-	def parse_document_end(self):
+-		token=self.scanner.peek_token();start_mark=end_mark=token.start_mark;explicit=_B
+-		if self.scanner.check_token(DocumentEndToken):token=self.scanner.get_token();end_mark=token.end_mark;explicit=_C
+-		event=DocumentEndEvent(start_mark,end_mark,explicit=explicit)
+-		if self.resolver.processing_version==(1,1):self.state=self.parse_document_start
+-		else:self.state=self.parse_implicit_document_start
+-		return event
+-	def parse_document_content(self):
+-		if self.scanner.check_token(DirectiveToken,DocumentStartToken,DocumentEndToken,StreamEndToken):event=self.process_empty_scalar(self.scanner.peek_token().start_mark);self.state=self.states.pop();return event
+-		else:return self.parse_block_node()
+-	def process_directives(self):
+-		yaml_version=_A;self.tag_handles={}
+-		while self.scanner.check_token(DirectiveToken):
+-			token=self.scanner.get_token()
+-			if token.name=='YAML':
+-				if yaml_version is not _A:raise ParserError(_A,_A,'found duplicate YAML directive',token.start_mark)
+-				major,minor=token.value
+-				if major!=1:raise ParserError(_A,_A,'found incompatible YAML document (version 1.* is required)',token.start_mark)
+-				yaml_version=token.value
+-			elif token.name=='TAG':
+-				handle,prefix=token.value
+-				if handle in self.tag_handles:raise ParserError(_A,_A,'duplicate tag handle %r'%utf8(handle),token.start_mark)
+-				self.tag_handles[handle]=prefix
+-		if bool(self.tag_handles):value=yaml_version,self.tag_handles.copy()
+-		else:value=yaml_version,_A
+-		if self.loader is not _A and hasattr(self.loader,'tags'):
+-			self.loader.version=yaml_version
+-			if self.loader.tags is _A:self.loader.tags={}
+-			for k in self.tag_handles:self.loader.tags[k]=self.tag_handles[k]
+-		for key in self.DEFAULT_TAGS:
+-			if key not in self.tag_handles:self.tag_handles[key]=self.DEFAULT_TAGS[key]
+-		return value
+-	def parse_block_node(self):return self.parse_node(block=_C)
+-	def parse_flow_node(self):return self.parse_node()
+-	def parse_block_node_or_indentless_sequence(self):return self.parse_node(block=_C,indentless_sequence=_C)
+-	def transform_tag(self,handle,suffix):return self.tag_handles[handle]+suffix
+-	def parse_node(self,block=_B,indentless_sequence=_B):
+-		if self.scanner.check_token(AliasToken):token=self.scanner.get_token();event=AliasEvent(token.value,token.start_mark,token.end_mark);self.state=self.states.pop();return event
+-		anchor=_A;tag=_A;start_mark=end_mark=tag_mark=_A
+-		if self.scanner.check_token(AnchorToken):
+-			token=self.scanner.get_token();start_mark=token.start_mark;end_mark=token.end_mark;anchor=token.value
+-			if self.scanner.check_token(TagToken):token=self.scanner.get_token();tag_mark=token.start_mark;end_mark=token.end_mark;tag=token.value
+-		elif self.scanner.check_token(TagToken):
+-			token=self.scanner.get_token();start_mark=tag_mark=token.start_mark;end_mark=token.end_mark;tag=token.value
+-			if self.scanner.check_token(AnchorToken):token=self.scanner.get_token();start_mark=tag_mark=token.start_mark;end_mark=token.end_mark;anchor=token.value
+-		if tag is not _A:
+-			handle,suffix=tag
+-			if handle is not _A:
+-				if handle not in self.tag_handles:raise ParserError('while parsing a node',start_mark,'found undefined tag handle %r'%utf8(handle),tag_mark)
+-				tag=self.transform_tag(handle,suffix)
+-			else:tag=suffix
+-		if start_mark is _A:start_mark=end_mark=self.scanner.peek_token().start_mark
+-		event=_A;implicit=tag is _A or tag==_D
+-		if indentless_sequence and self.scanner.check_token(BlockEntryToken):
+-			comment=_A;pt=self.scanner.peek_token()
+-			if pt.comment and pt.comment[0]:comment=[pt.comment[0],[]];pt.comment[0]=_A
+-			end_mark=self.scanner.peek_token().end_mark;event=SequenceStartEvent(anchor,tag,implicit,start_mark,end_mark,flow_style=_B,comment=comment);self.state=self.parse_indentless_sequence_entry;return event
+-		if self.scanner.check_token(ScalarToken):
+-			token=self.scanner.get_token();end_mark=token.end_mark
+-			if token.plain and tag is _A or tag==_D:implicit=_C,_B
+-			elif tag is _A:implicit=_B,_C
+-			else:implicit=_B,_B
+-			event=ScalarEvent(anchor,tag,implicit,token.value,start_mark,end_mark,style=token.style,comment=token.comment);self.state=self.states.pop()
+-		elif self.scanner.check_token(FlowSequenceStartToken):pt=self.scanner.peek_token();end_mark=pt.end_mark;event=SequenceStartEvent(anchor,tag,implicit,start_mark,end_mark,flow_style=_C,comment=pt.comment);self.state=self.parse_flow_sequence_first_entry
+-		elif self.scanner.check_token(FlowMappingStartToken):pt=self.scanner.peek_token();end_mark=pt.end_mark;event=MappingStartEvent(anchor,tag,implicit,start_mark,end_mark,flow_style=_C,comment=pt.comment);self.state=self.parse_flow_mapping_first_key
+-		elif block and self.scanner.check_token(BlockSequenceStartToken):
+-			end_mark=self.scanner.peek_token().start_mark;pt=self.scanner.peek_token();comment=pt.comment
+-			if comment is _A or comment[1]is _A:comment=pt.split_comment()
+-			event=SequenceStartEvent(anchor,tag,implicit,start_mark,end_mark,flow_style=_B,comment=comment);self.state=self.parse_block_sequence_first_entry
+-		elif block and self.scanner.check_token(BlockMappingStartToken):end_mark=self.scanner.peek_token().start_mark;comment=self.scanner.peek_token().comment;event=MappingStartEvent(anchor,tag,implicit,start_mark,end_mark,flow_style=_B,comment=comment);self.state=self.parse_block_mapping_first_key
+-		elif anchor is not _A or tag is not _A:event=ScalarEvent(anchor,tag,(implicit,_B),'',start_mark,end_mark);self.state=self.states.pop()
+-		else:
+-			if block:node='block'
+-			else:node='flow'
+-			token=self.scanner.peek_token();raise ParserError('while parsing a %s node'%node,start_mark,'expected the node content, but found %r'%token.id,token.start_mark)
+-		return event
+-	def parse_block_sequence_first_entry(self):token=self.scanner.get_token();self.marks.append(token.start_mark);return self.parse_block_sequence_entry()
+-	def parse_block_sequence_entry(self):
+-		if self.scanner.check_token(BlockEntryToken):
+-			token=self.scanner.get_token();token.move_comment(self.scanner.peek_token())
+-			if not self.scanner.check_token(BlockEntryToken,BlockEndToken):self.states.append(self.parse_block_sequence_entry);return self.parse_block_node()
+-			else:self.state=self.parse_block_sequence_entry;return self.process_empty_scalar(token.end_mark)
+-		if not self.scanner.check_token(BlockEndToken):token=self.scanner.peek_token();raise ParserError('while parsing a block collection',self.marks[-1],_F%token.id,token.start_mark)
+-		token=self.scanner.get_token();event=SequenceEndEvent(token.start_mark,token.end_mark,comment=token.comment);self.state=self.states.pop();self.marks.pop();return event
+-	def parse_indentless_sequence_entry(self):
+-		if self.scanner.check_token(BlockEntryToken):
+-			token=self.scanner.get_token();token.move_comment(self.scanner.peek_token())
+-			if not self.scanner.check_token(BlockEntryToken,KeyToken,ValueToken,BlockEndToken):self.states.append(self.parse_indentless_sequence_entry);return self.parse_block_node()
+-			else:self.state=self.parse_indentless_sequence_entry;return self.process_empty_scalar(token.end_mark)
+-		token=self.scanner.peek_token();event=SequenceEndEvent(token.start_mark,token.start_mark,comment=token.comment);self.state=self.states.pop();return event
+-	def parse_block_mapping_first_key(self):token=self.scanner.get_token();self.marks.append(token.start_mark);return self.parse_block_mapping_key()
+-	def parse_block_mapping_key(self):
+-		if self.scanner.check_token(KeyToken):
+-			token=self.scanner.get_token();token.move_comment(self.scanner.peek_token())
+-			if not self.scanner.check_token(KeyToken,ValueToken,BlockEndToken):self.states.append(self.parse_block_mapping_value);return self.parse_block_node_or_indentless_sequence()
+-			else:self.state=self.parse_block_mapping_value;return self.process_empty_scalar(token.end_mark)
+-		if self.resolver.processing_version>(1,1)and self.scanner.check_token(ValueToken):self.state=self.parse_block_mapping_value;return self.process_empty_scalar(self.scanner.peek_token().start_mark)
+-		if not self.scanner.check_token(BlockEndToken):token=self.scanner.peek_token();raise ParserError('while parsing a block mapping',self.marks[-1],_F%token.id,token.start_mark)
+-		token=self.scanner.get_token();token.move_comment(self.scanner.peek_token());event=MappingEndEvent(token.start_mark,token.end_mark,comment=token.comment);self.state=self.states.pop();self.marks.pop();return event
+-	def parse_block_mapping_value(self):
+-		if self.scanner.check_token(ValueToken):
+-			token=self.scanner.get_token()
+-			if self.scanner.check_token(ValueToken):token.move_comment(self.scanner.peek_token())
+-			elif not self.scanner.check_token(KeyToken):token.move_comment(self.scanner.peek_token(),empty=_C)
+-			if not self.scanner.check_token(KeyToken,ValueToken,BlockEndToken):self.states.append(self.parse_block_mapping_key);return self.parse_block_node_or_indentless_sequence()
+-			else:
+-				self.state=self.parse_block_mapping_key;comment=token.comment
+-				if comment is _A:
+-					token=self.scanner.peek_token();comment=token.comment
+-					if comment:token._comment=[_A,comment[1]];comment=[comment[0],_A]
+-				return self.process_empty_scalar(token.end_mark,comment=comment)
+-		else:self.state=self.parse_block_mapping_key;token=self.scanner.peek_token();return self.process_empty_scalar(token.start_mark)
+-	def parse_flow_sequence_first_entry(self):token=self.scanner.get_token();self.marks.append(token.start_mark);return self.parse_flow_sequence_entry(first=_C)
+-	def parse_flow_sequence_entry(self,first=_B):
+-		if not self.scanner.check_token(FlowSequenceEndToken):
+-			if not first:
+-				if self.scanner.check_token(FlowEntryToken):self.scanner.get_token()
+-				else:token=self.scanner.peek_token();raise ParserError('while parsing a flow sequence',self.marks[-1],"expected ',' or ']', but got %r"%token.id,token.start_mark)
+-			if self.scanner.check_token(KeyToken):token=self.scanner.peek_token();event=MappingStartEvent(_A,_A,_C,token.start_mark,token.end_mark,flow_style=_C);self.state=self.parse_flow_sequence_entry_mapping_key;return event
+-			elif not self.scanner.check_token(FlowSequenceEndToken):self.states.append(self.parse_flow_sequence_entry);return self.parse_flow_node()
+-		token=self.scanner.get_token();event=SequenceEndEvent(token.start_mark,token.end_mark,comment=token.comment);self.state=self.states.pop();self.marks.pop();return event
+-	def parse_flow_sequence_entry_mapping_key(self):
+-		token=self.scanner.get_token()
+-		if not self.scanner.check_token(ValueToken,FlowEntryToken,FlowSequenceEndToken):self.states.append(self.parse_flow_sequence_entry_mapping_value);return self.parse_flow_node()
+-		else:self.state=self.parse_flow_sequence_entry_mapping_value;return self.process_empty_scalar(token.end_mark)
+-	def parse_flow_sequence_entry_mapping_value(self):
+-		if self.scanner.check_token(ValueToken):
+-			token=self.scanner.get_token()
+-			if not self.scanner.check_token(FlowEntryToken,FlowSequenceEndToken):self.states.append(self.parse_flow_sequence_entry_mapping_end);return self.parse_flow_node()
+-			else:self.state=self.parse_flow_sequence_entry_mapping_end;return self.process_empty_scalar(token.end_mark)
+-		else:self.state=self.parse_flow_sequence_entry_mapping_end;token=self.scanner.peek_token();return self.process_empty_scalar(token.start_mark)
+-	def parse_flow_sequence_entry_mapping_end(self):self.state=self.parse_flow_sequence_entry;token=self.scanner.peek_token();return MappingEndEvent(token.start_mark,token.start_mark)
+-	def parse_flow_mapping_first_key(self):token=self.scanner.get_token();self.marks.append(token.start_mark);return self.parse_flow_mapping_key(first=_C)
+-	def parse_flow_mapping_key(self,first=_B):
+-		if not self.scanner.check_token(FlowMappingEndToken):
+-			if not first:
+-				if self.scanner.check_token(FlowEntryToken):self.scanner.get_token()
+-				else:token=self.scanner.peek_token();raise ParserError('while parsing a flow mapping',self.marks[-1],"expected ',' or '}', but got %r"%token.id,token.start_mark)
+-			if self.scanner.check_token(KeyToken):
+-				token=self.scanner.get_token()
+-				if not self.scanner.check_token(ValueToken,FlowEntryToken,FlowMappingEndToken):self.states.append(self.parse_flow_mapping_value);return self.parse_flow_node()
+-				else:self.state=self.parse_flow_mapping_value;return self.process_empty_scalar(token.end_mark)
+-			elif self.resolver.processing_version>(1,1)and self.scanner.check_token(ValueToken):self.state=self.parse_flow_mapping_value;return self.process_empty_scalar(self.scanner.peek_token().end_mark)
+-			elif not self.scanner.check_token(FlowMappingEndToken):self.states.append(self.parse_flow_mapping_empty_value);return self.parse_flow_node()
+-		token=self.scanner.get_token();event=MappingEndEvent(token.start_mark,token.end_mark,comment=token.comment);self.state=self.states.pop();self.marks.pop();return event
+-	def parse_flow_mapping_value(self):
+-		if self.scanner.check_token(ValueToken):
+-			token=self.scanner.get_token()
+-			if not self.scanner.check_token(FlowEntryToken,FlowMappingEndToken):self.states.append(self.parse_flow_mapping_key);return self.parse_flow_node()
+-			else:self.state=self.parse_flow_mapping_key;return self.process_empty_scalar(token.end_mark)
+-		else:self.state=self.parse_flow_mapping_key;token=self.scanner.peek_token();return self.process_empty_scalar(token.start_mark)
+-	def parse_flow_mapping_empty_value(self):self.state=self.parse_flow_mapping_key;return self.process_empty_scalar(self.scanner.peek_token().start_mark)
+-	def process_empty_scalar(self,mark,comment=_A):return ScalarEvent(_A,_A,(_C,_B),'',mark,mark,comment=comment)
+-class RoundTripParser(Parser):
+-	def transform_tag(self,handle,suffix):
+-		if handle=='!!'and suffix in('null','bool','int','float','binary','timestamp','omap','pairs','set','str','seq','map'):return Parser.transform_tag(self,handle,suffix)
+-		return handle+suffix
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/py.typed b/dynaconf/vendor/ruamel/yaml/py.typed
+deleted file mode 100644
+index e69de29..0000000
+diff --git a/dynaconf/vendor/ruamel/yaml/reader.py b/dynaconf/vendor/ruamel/yaml/reader.py
+deleted file mode 100644
+index 06bd083..0000000
+--- a/dynaconf/vendor/ruamel/yaml/reader.py
++++ /dev/null
+@@ -1,117 +0,0 @@
+-from __future__ import absolute_import
+-_F='\ufeff'
+-_E='\x00'
+-_D=False
+-_C='ascii'
+-_B='\n'
+-_A=None
+-import codecs
+-from .error import YAMLError,FileMark,StringMark,YAMLStreamError
+-from .compat import text_type,binary_type,PY3,UNICODE_SIZE
+-from .util import RegExp
+-if _D:from typing import Any,Dict,Optional,List,Union,Text,Tuple,Optional
+-__all__=['Reader','ReaderError']
+-class ReaderError(YAMLError):
+-	def __init__(A,name,position,character,encoding,reason):A.name=name;A.character=character;A.position=position;A.encoding=encoding;A.reason=reason
+-	def __str__(A):
+-		if isinstance(A.character,binary_type):return'\'%s\' codec can\'t decode byte #x%02x: %s\n  in "%s", position %d'%(A.encoding,ord(A.character),A.reason,A.name,A.position)
+-		else:return'unacceptable character #x%04x: %s\n  in "%s", position %d'%(A.character,A.reason,A.name,A.position)
+-class Reader:
+-	def __init__(A,stream,loader=_A):
+-		A.loader=loader
+-		if A.loader is not _A and getattr(A.loader,'_reader',_A)is _A:A.loader._reader=A
+-		A.reset_reader();A.stream=stream
+-	def reset_reader(A):A.name=_A;A.stream_pointer=0;A.eof=True;A.buffer='';A.pointer=0;A.raw_buffer=_A;A.raw_decode=_A;A.encoding=_A;A.index=0;A.line=0;A.column=0
+-	@property
+-	def stream(self):
+-		try:return self._stream
+-		except AttributeError:raise YAMLStreamError('input stream needs to specified')
+-	@stream.setter
+-	def stream(self,val):
+-		B=val;A=self
+-		if B is _A:return
+-		A._stream=_A
+-		if isinstance(B,text_type):A.name='<unicode string>';A.check_printable(B);A.buffer=B+_E
+-		elif isinstance(B,binary_type):A.name='<byte string>';A.raw_buffer=B;A.determine_encoding()
+-		else:
+-			if not hasattr(B,'read'):raise YAMLStreamError('stream argument needs to have a read() method')
+-			A._stream=B;A.name=getattr(A.stream,'name','<file>');A.eof=_D;A.raw_buffer=_A;A.determine_encoding()
+-	def peek(A,index=0):
+-		B=index
+-		try:return A.buffer[A.pointer+B]
+-		except IndexError:A.update(B+1);return A.buffer[A.pointer+B]
+-	def prefix(A,length=1):
+-		B=length
+-		if A.pointer+B>=len(A.buffer):A.update(B)
+-		return A.buffer[A.pointer:A.pointer+B]
+-	def forward_1_1(A,length=1):
+-		B=length
+-		if A.pointer+B+1>=len(A.buffer):A.update(B+1)
+-		while B!=0:
+-			C=A.buffer[A.pointer];A.pointer+=1;A.index+=1
+-			if C in'\n\x85\u2028\u2029'or C=='\r'and A.buffer[A.pointer]!=_B:A.line+=1;A.column=0
+-			elif C!=_F:A.column+=1
+-			B-=1
+-	def forward(A,length=1):
+-		B=length
+-		if A.pointer+B+1>=len(A.buffer):A.update(B+1)
+-		while B!=0:
+-			C=A.buffer[A.pointer];A.pointer+=1;A.index+=1
+-			if C==_B or C=='\r'and A.buffer[A.pointer]!=_B:A.line+=1;A.column=0
+-			elif C!=_F:A.column+=1
+-			B-=1
+-	def get_mark(A):
+-		if A.stream is _A:return StringMark(A.name,A.index,A.line,A.column,A.buffer,A.pointer)
+-		else:return FileMark(A.name,A.index,A.line,A.column)
+-	def determine_encoding(A):
+-		while not A.eof and(A.raw_buffer is _A or len(A.raw_buffer)<2):A.update_raw()
+-		if isinstance(A.raw_buffer,binary_type):
+-			if A.raw_buffer.startswith(codecs.BOM_UTF16_LE):A.raw_decode=codecs.utf_16_le_decode;A.encoding='utf-16-le'
+-			elif A.raw_buffer.startswith(codecs.BOM_UTF16_BE):A.raw_decode=codecs.utf_16_be_decode;A.encoding='utf-16-be'
+-			else:A.raw_decode=codecs.utf_8_decode;A.encoding='utf-8'
+-		A.update(1)
+-	if UNICODE_SIZE==2:NON_PRINTABLE=RegExp('[^\t\n\r -~\x85\xa0-\ud7ff\ue000-�]')
+-	else:NON_PRINTABLE=RegExp('[^\t\n\r -~\x85\xa0-\ud7ff\ue000-�𐀀-\U0010ffff]')
+-	_printable_ascii=('\t\n\r'+''.join(map(chr,range(32,127)))).encode(_C)
+-	@classmethod
+-	def _get_non_printable_ascii(D,data):
+-		A=data.encode(_C);B=A.translate(_A,D._printable_ascii)
+-		if not B:return _A
+-		C=B[:1];return A.index(C),C.decode(_C)
+-	@classmethod
+-	def _get_non_printable_regex(B,data):
+-		A=B.NON_PRINTABLE.search(data)
+-		if not bool(A):return _A
+-		return A.start(),A.group()
+-	@classmethod
+-	def _get_non_printable(A,data):
+-		try:return A._get_non_printable_ascii(data)
+-		except UnicodeEncodeError:return A._get_non_printable_regex(data)
+-	def check_printable(A,data):
+-		B=A._get_non_printable(data)
+-		if B is not _A:C,D=B;E=A.index+(len(A.buffer)-A.pointer)+C;raise ReaderError(A.name,E,ord(D),'unicode','special characters are not allowed')
+-	def update(A,length):
+-		if A.raw_buffer is _A:return
+-		A.buffer=A.buffer[A.pointer:];A.pointer=0
+-		while len(A.buffer)<length:
+-			if not A.eof:A.update_raw()
+-			if A.raw_decode is not _A:
+-				try:C,E=A.raw_decode(A.raw_buffer,'strict',A.eof)
+-				except UnicodeDecodeError as B:
+-					if PY3:F=A.raw_buffer[B.start]
+-					else:F=B.object[B.start]
+-					if A.stream is not _A:D=A.stream_pointer-len(A.raw_buffer)+B.start
+-					elif A.stream is not _A:D=A.stream_pointer-len(A.raw_buffer)+B.start
+-					else:D=B.start
+-					raise ReaderError(A.name,D,F,B.encoding,B.reason)
+-			else:C=A.raw_buffer;E=len(C)
+-			A.check_printable(C);A.buffer+=C;A.raw_buffer=A.raw_buffer[E:]
+-			if A.eof:A.buffer+=_E;A.raw_buffer=_A;break
+-	def update_raw(A,size=_A):
+-		C=size
+-		if C is _A:C=4096 if PY3 else 1024
+-		B=A.stream.read(C)
+-		if A.raw_buffer is _A:A.raw_buffer=B
+-		else:A.raw_buffer+=B
+-		A.stream_pointer+=len(B)
+-		if not B:A.eof=True
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/representer.py b/dynaconf/vendor/ruamel/yaml/representer.py
+deleted file mode 100644
+index dc6bc3d..0000000
+--- a/dynaconf/vendor/ruamel/yaml/representer.py
++++ /dev/null
+@@ -1,578 +0,0 @@
+-from __future__ import print_function,absolute_import,division
+-_e='tag:yaml.org,2002:'
+-_d='state'
+-_c='args'
+-_b='tag:yaml.org,2002:python/object:'
+-_a='__getstate__'
+-_Z='tag:yaml.org,2002:set'
+-_Y='-.inf'
+-_X='.inf'
+-_W='.nan'
+-_V='base64'
+-_U='utf-8'
+-_T='null'
+-_S='typ'
+-_R='tag:yaml.org,2002:python/object/new:'
+-_Q='tag:yaml.org,2002:timestamp'
+-_P='tag:yaml.org,2002:map'
+-_O='tag:yaml.org,2002:seq'
+-_N='tag:yaml.org,2002:float'
+-_M='tag:yaml.org,2002:binary'
+-_L='tag:yaml.org,2002:null'
+-_K=0.0
+-_J='|'
+-_I='%s.%s'
+-_H='.'
+-_G='tag:yaml.org,2002:int'
+-_F='comment'
+-_E='ascii'
+-_D='tag:yaml.org,2002:str'
+-_C=False
+-_B=True
+-_A=None
+-from .error import *
+-from .nodes import *
+-from .compat import text_type,binary_type,to_unicode,PY2,PY3
+-from .compat import ordereddict
+-from .compat import nprint,nprintf
+-from .scalarstring import LiteralScalarString,FoldedScalarString,SingleQuotedScalarString,DoubleQuotedScalarString,PlainScalarString
+-from .scalarint import ScalarInt,BinaryInt,OctalInt,HexInt,HexCapsInt
+-from .scalarfloat import ScalarFloat
+-from .scalarbool import ScalarBoolean
+-from .timestamp import TimeStamp
+-import datetime,sys,types
+-if PY3:import copyreg,base64
+-else:import copy_reg as copyreg
+-if _C:from typing import Dict,List,Any,Union,Text,Optional
+-__all__=['BaseRepresenter','SafeRepresenter','Representer','RepresenterError','RoundTripRepresenter']
+-class RepresenterError(YAMLError):0
+-if PY2:
+-	def get_classobj_bases(cls):
+-		bases=[cls]
+-		for base in cls.__bases__:bases.extend(get_classobj_bases(base))
+-		return bases
+-class BaseRepresenter:
+-	yaml_representers={};yaml_multi_representers={}
+-	def __init__(self,default_style=_A,default_flow_style=_A,dumper=_A):
+-		self.dumper=dumper
+-		if self.dumper is not _A:self.dumper._representer=self
+-		self.default_style=default_style;self.default_flow_style=default_flow_style;self.represented_objects={};self.object_keeper=[];self.alias_key=_A;self.sort_base_mapping_type_on_output=_B
+-	@property
+-	def serializer(self):
+-		try:
+-			if hasattr(self.dumper,_S):return self.dumper.serializer
+-			return self.dumper._serializer
+-		except AttributeError:return self
+-	def represent(self,data):node=self.represent_data(data);self.serializer.serialize(node);self.represented_objects={};self.object_keeper=[];self.alias_key=_A
+-	def represent_data(self,data):
+-		if self.ignore_aliases(data):self.alias_key=_A
+-		else:self.alias_key=id(data)
+-		if self.alias_key is not _A:
+-			if self.alias_key in self.represented_objects:node=self.represented_objects[self.alias_key];return node
+-			self.object_keeper.append(data)
+-		data_types=type(data).__mro__
+-		if PY2:
+-			if isinstance(data,types.InstanceType):data_types=get_classobj_bases(data.__class__)+list(data_types)
+-		if data_types[0]in self.yaml_representers:node=self.yaml_representers[data_types[0]](self,data)
+-		else:
+-			for data_type in data_types:
+-				if data_type in self.yaml_multi_representers:node=self.yaml_multi_representers[data_type](self,data);break
+-			else:
+-				if _A in self.yaml_multi_representers:node=self.yaml_multi_representers[_A](self,data)
+-				elif _A in self.yaml_representers:node=self.yaml_representers[_A](self,data)
+-				else:node=ScalarNode(_A,text_type(data))
+-		return node
+-	def represent_key(self,data):return self.represent_data(data)
+-	@classmethod
+-	def add_representer(cls,data_type,representer):
+-		if'yaml_representers'not in cls.__dict__:cls.yaml_representers=cls.yaml_representers.copy()
+-		cls.yaml_representers[data_type]=representer
+-	@classmethod
+-	def add_multi_representer(cls,data_type,representer):
+-		if'yaml_multi_representers'not in cls.__dict__:cls.yaml_multi_representers=cls.yaml_multi_representers.copy()
+-		cls.yaml_multi_representers[data_type]=representer
+-	def represent_scalar(self,tag,value,style=_A,anchor=_A):
+-		if style is _A:style=self.default_style
+-		comment=_A
+-		if style and style[0]in'|>':
+-			comment=getattr(value,_F,_A)
+-			if comment:comment=[_A,[comment]]
+-		node=ScalarNode(tag,value,style=style,comment=comment,anchor=anchor)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		return node
+-	def represent_sequence(self,tag,sequence,flow_style=_A):
+-		value=[];node=SequenceNode(tag,value,flow_style=flow_style)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		for item in sequence:
+-			node_item=self.represent_data(item)
+-			if not(isinstance(node_item,ScalarNode)and not node_item.style):best_style=_C
+-			value.append(node_item)
+-		if flow_style is _A:
+-			if self.default_flow_style is not _A:node.flow_style=self.default_flow_style
+-			else:node.flow_style=best_style
+-		return node
+-	def represent_omap(self,tag,omap,flow_style=_A):
+-		value=[];node=SequenceNode(tag,value,flow_style=flow_style)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		for item_key in omap:item_val=omap[item_key];node_item=self.represent_data({item_key:item_val});value.append(node_item)
+-		if flow_style is _A:
+-			if self.default_flow_style is not _A:node.flow_style=self.default_flow_style
+-			else:node.flow_style=best_style
+-		return node
+-	def represent_mapping(self,tag,mapping,flow_style=_A):
+-		value=[];node=MappingNode(tag,value,flow_style=flow_style)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		if hasattr(mapping,'items'):
+-			mapping=list(mapping.items())
+-			if self.sort_base_mapping_type_on_output:
+-				try:mapping=sorted(mapping)
+-				except TypeError:pass
+-		for (item_key,item_value) in mapping:
+-			node_key=self.represent_key(item_key);node_value=self.represent_data(item_value)
+-			if not(isinstance(node_key,ScalarNode)and not node_key.style):best_style=_C
+-			if not(isinstance(node_value,ScalarNode)and not node_value.style):best_style=_C
+-			value.append((node_key,node_value))
+-		if flow_style is _A:
+-			if self.default_flow_style is not _A:node.flow_style=self.default_flow_style
+-			else:node.flow_style=best_style
+-		return node
+-	def ignore_aliases(self,data):return _C
+-class SafeRepresenter(BaseRepresenter):
+-	def ignore_aliases(self,data):
+-		if data is _A or isinstance(data,tuple)and data==():return _B
+-		if isinstance(data,(binary_type,text_type,bool,int,float)):return _B
+-		return _C
+-	def represent_none(self,data):return self.represent_scalar(_L,_T)
+-	if PY3:
+-		def represent_str(self,data):return self.represent_scalar(_D,data)
+-		def represent_binary(self,data):
+-			if hasattr(base64,'encodebytes'):data=base64.encodebytes(data).decode(_E)
+-			else:data=base64.encodestring(data).decode(_E)
+-			return self.represent_scalar(_M,data,style=_J)
+-	else:
+-		def represent_str(self,data):
+-			tag=_A;style=_A
+-			try:data=unicode(data,_E);tag=_D
+-			except UnicodeDecodeError:
+-				try:data=unicode(data,_U);tag=_D
+-				except UnicodeDecodeError:data=data.encode(_V);tag=_M;style=_J
+-			return self.represent_scalar(tag,data,style=style)
+-		def represent_unicode(self,data):return self.represent_scalar(_D,data)
+-	def represent_bool(self,data,anchor=_A):
+-		try:value=self.dumper.boolean_representation[bool(data)]
+-		except AttributeError:
+-			if data:value='true'
+-			else:value='false'
+-		return self.represent_scalar('tag:yaml.org,2002:bool',value,anchor=anchor)
+-	def represent_int(self,data):return self.represent_scalar(_G,text_type(data))
+-	if PY2:
+-		def represent_long(self,data):return self.represent_scalar(_G,text_type(data))
+-	inf_value=1e+300
+-	while repr(inf_value)!=repr(inf_value*inf_value):inf_value*=inf_value
+-	def represent_float(self,data):
+-		if data!=data or data==_K and data==1.0:value=_W
+-		elif data==self.inf_value:value=_X
+-		elif data==-self.inf_value:value=_Y
+-		else:
+-			value=to_unicode(repr(data)).lower()
+-			if getattr(self.serializer,'use_version',_A)==(1,1):
+-				if _H not in value and'e'in value:value=value.replace('e','.0e',1)
+-		return self.represent_scalar(_N,value)
+-	def represent_list(self,data):return self.represent_sequence(_O,data)
+-	def represent_dict(self,data):return self.represent_mapping(_P,data)
+-	def represent_ordereddict(self,data):return self.represent_omap('tag:yaml.org,2002:omap',data)
+-	def represent_set(self,data):
+-		value={}
+-		for key in data:value[key]=_A
+-		return self.represent_mapping(_Z,value)
+-	def represent_date(self,data):value=to_unicode(data.isoformat());return self.represent_scalar(_Q,value)
+-	def represent_datetime(self,data):value=to_unicode(data.isoformat(' '));return self.represent_scalar(_Q,value)
+-	def represent_yaml_object(self,tag,data,cls,flow_style=_A):
+-		if hasattr(data,_a):state=data.__getstate__()
+-		else:state=data.__dict__.copy()
+-		return self.represent_mapping(tag,state,flow_style=flow_style)
+-	def represent_undefined(self,data):raise RepresenterError('cannot represent an object: %s'%(data,))
+-SafeRepresenter.add_representer(type(_A),SafeRepresenter.represent_none)
+-SafeRepresenter.add_representer(str,SafeRepresenter.represent_str)
+-if PY2:SafeRepresenter.add_representer(unicode,SafeRepresenter.represent_unicode)
+-else:SafeRepresenter.add_representer(bytes,SafeRepresenter.represent_binary)
+-SafeRepresenter.add_representer(bool,SafeRepresenter.represent_bool)
+-SafeRepresenter.add_representer(int,SafeRepresenter.represent_int)
+-if PY2:SafeRepresenter.add_representer(long,SafeRepresenter.represent_long)
+-SafeRepresenter.add_representer(float,SafeRepresenter.represent_float)
+-SafeRepresenter.add_representer(list,SafeRepresenter.represent_list)
+-SafeRepresenter.add_representer(tuple,SafeRepresenter.represent_list)
+-SafeRepresenter.add_representer(dict,SafeRepresenter.represent_dict)
+-SafeRepresenter.add_representer(set,SafeRepresenter.represent_set)
+-SafeRepresenter.add_representer(ordereddict,SafeRepresenter.represent_ordereddict)
+-if sys.version_info>=(2,7):import collections;SafeRepresenter.add_representer(collections.OrderedDict,SafeRepresenter.represent_ordereddict)
+-SafeRepresenter.add_representer(datetime.date,SafeRepresenter.represent_date)
+-SafeRepresenter.add_representer(datetime.datetime,SafeRepresenter.represent_datetime)
+-SafeRepresenter.add_representer(_A,SafeRepresenter.represent_undefined)
+-class Representer(SafeRepresenter):
+-	if PY2:
+-		def represent_str(self,data):
+-			tag=_A;style=_A
+-			try:data=unicode(data,_E);tag=_D
+-			except UnicodeDecodeError:
+-				try:data=unicode(data,_U);tag='tag:yaml.org,2002:python/str'
+-				except UnicodeDecodeError:data=data.encode(_V);tag=_M;style=_J
+-			return self.represent_scalar(tag,data,style=style)
+-		def represent_unicode(self,data):
+-			tag=_A
+-			try:data.encode(_E);tag='tag:yaml.org,2002:python/unicode'
+-			except UnicodeEncodeError:tag=_D
+-			return self.represent_scalar(tag,data)
+-		def represent_long(self,data):
+-			tag=_G
+-			if int(data)is not data:tag='tag:yaml.org,2002:python/long'
+-			return self.represent_scalar(tag,to_unicode(data))
+-	def represent_complex(self,data):
+-		if data.imag==_K:data='%r'%data.real
+-		elif data.real==_K:data='%rj'%data.imag
+-		elif data.imag>0:data='%r+%rj'%(data.real,data.imag)
+-		else:data='%r%rj'%(data.real,data.imag)
+-		return self.represent_scalar('tag:yaml.org,2002:python/complex',data)
+-	def represent_tuple(self,data):return self.represent_sequence('tag:yaml.org,2002:python/tuple',data)
+-	def represent_name(self,data):
+-		try:name=_I%(data.__module__,data.__qualname__)
+-		except AttributeError:name=_I%(data.__module__,data.__name__)
+-		return self.represent_scalar('tag:yaml.org,2002:python/name:'+name,'')
+-	def represent_module(self,data):return self.represent_scalar('tag:yaml.org,2002:python/module:'+data.__name__,'')
+-	if PY2:
+-		def represent_instance(self,data):
+-			cls=data.__class__;class_name=_I%(cls.__module__,cls.__name__);args=_A;state=_A
+-			if hasattr(data,'__getinitargs__'):args=list(data.__getinitargs__())
+-			if hasattr(data,_a):state=data.__getstate__()
+-			else:state=data.__dict__
+-			if args is _A and isinstance(state,dict):return self.represent_mapping(_b+class_name,state)
+-			if isinstance(state,dict)and not state:return self.represent_sequence(_R+class_name,args)
+-			value={}
+-			if bool(args):value[_c]=args
+-			value[_d]=state;return self.represent_mapping(_R+class_name,value)
+-	def represent_object(self,data):
+-		cls=type(data)
+-		if cls in copyreg.dispatch_table:reduce=copyreg.dispatch_table[cls](data)
+-		elif hasattr(data,'__reduce_ex__'):reduce=data.__reduce_ex__(2)
+-		elif hasattr(data,'__reduce__'):reduce=data.__reduce__()
+-		else:raise RepresenterError('cannot represent object: %r'%(data,))
+-		reduce=(list(reduce)+[_A]*5)[:5];function,args,state,listitems,dictitems=reduce;args=list(args)
+-		if state is _A:state={}
+-		if listitems is not _A:listitems=list(listitems)
+-		if dictitems is not _A:dictitems=dict(dictitems)
+-		if function.__name__=='__newobj__':function=args[0];args=args[1:];tag=_R;newobj=_B
+-		else:tag='tag:yaml.org,2002:python/object/apply:';newobj=_C
+-		try:function_name=_I%(function.__module__,function.__qualname__)
+-		except AttributeError:function_name=_I%(function.__module__,function.__name__)
+-		if not args and not listitems and not dictitems and isinstance(state,dict)and newobj:return self.represent_mapping(_b+function_name,state)
+-		if not listitems and not dictitems and isinstance(state,dict)and not state:return self.represent_sequence(tag+function_name,args)
+-		value={}
+-		if args:value[_c]=args
+-		if state or not isinstance(state,dict):value[_d]=state
+-		if listitems:value['listitems']=listitems
+-		if dictitems:value['dictitems']=dictitems
+-		return self.represent_mapping(tag+function_name,value)
+-if PY2:Representer.add_representer(str,Representer.represent_str);Representer.add_representer(unicode,Representer.represent_unicode);Representer.add_representer(long,Representer.represent_long)
+-Representer.add_representer(complex,Representer.represent_complex)
+-Representer.add_representer(tuple,Representer.represent_tuple)
+-Representer.add_representer(type,Representer.represent_name)
+-if PY2:Representer.add_representer(types.ClassType,Representer.represent_name)
+-Representer.add_representer(types.FunctionType,Representer.represent_name)
+-Representer.add_representer(types.BuiltinFunctionType,Representer.represent_name)
+-Representer.add_representer(types.ModuleType,Representer.represent_module)
+-if PY2:Representer.add_multi_representer(types.InstanceType,Representer.represent_instance)
+-Representer.add_multi_representer(object,Representer.represent_object)
+-Representer.add_multi_representer(type,Representer.represent_name)
+-from .comments import CommentedMap,CommentedOrderedMap,CommentedSeq,CommentedKeySeq,CommentedKeyMap,CommentedSet,comment_attrib,merge_attrib,TaggedScalar
+-class RoundTripRepresenter(SafeRepresenter):
+-	def __init__(self,default_style=_A,default_flow_style=_A,dumper=_A):
+-		if not hasattr(dumper,_S)and default_flow_style is _A:default_flow_style=_C
+-		SafeRepresenter.__init__(self,default_style=default_style,default_flow_style=default_flow_style,dumper=dumper)
+-	def ignore_aliases(self,data):
+-		try:
+-			if data.anchor is not _A and data.anchor.value is not _A:return _C
+-		except AttributeError:pass
+-		return SafeRepresenter.ignore_aliases(self,data)
+-	def represent_none(self,data):
+-		if len(self.represented_objects)==0 and not self.serializer.use_explicit_start:return self.represent_scalar(_L,_T)
+-		return self.represent_scalar(_L,'')
+-	def represent_literal_scalarstring(self,data):
+-		tag=_A;style=_J;anchor=data.yaml_anchor(any=_B)
+-		if PY2 and not isinstance(data,unicode):data=unicode(data,_E)
+-		tag=_D;return self.represent_scalar(tag,data,style=style,anchor=anchor)
+-	represent_preserved_scalarstring=represent_literal_scalarstring
+-	def represent_folded_scalarstring(self,data):
+-		tag=_A;style='>';anchor=data.yaml_anchor(any=_B)
+-		for fold_pos in reversed(getattr(data,'fold_pos',[])):
+-			if data[fold_pos]==' 'and(fold_pos>0 and not data[fold_pos-1].isspace())and(fold_pos<len(data)and not data[fold_pos+1].isspace()):data=data[:fold_pos]+'\x07'+data[fold_pos:]
+-		if PY2 and not isinstance(data,unicode):data=unicode(data,_E)
+-		tag=_D;return self.represent_scalar(tag,data,style=style,anchor=anchor)
+-	def represent_single_quoted_scalarstring(self,data):
+-		tag=_A;style="'";anchor=data.yaml_anchor(any=_B)
+-		if PY2 and not isinstance(data,unicode):data=unicode(data,_E)
+-		tag=_D;return self.represent_scalar(tag,data,style=style,anchor=anchor)
+-	def represent_double_quoted_scalarstring(self,data):
+-		tag=_A;style='"';anchor=data.yaml_anchor(any=_B)
+-		if PY2 and not isinstance(data,unicode):data=unicode(data,_E)
+-		tag=_D;return self.represent_scalar(tag,data,style=style,anchor=anchor)
+-	def represent_plain_scalarstring(self,data):
+-		tag=_A;style='';anchor=data.yaml_anchor(any=_B)
+-		if PY2 and not isinstance(data,unicode):data=unicode(data,_E)
+-		tag=_D;return self.represent_scalar(tag,data,style=style,anchor=anchor)
+-	def insert_underscore(self,prefix,s,underscore,anchor=_A):
+-		A='_'
+-		if underscore is _A:return self.represent_scalar(_G,prefix+s,anchor=anchor)
+-		if underscore[0]:
+-			sl=list(s);pos=len(s)-underscore[0]
+-			while pos>0:sl.insert(pos,A);pos-=underscore[0]
+-			s=''.join(sl)
+-		if underscore[1]:s=A+s
+-		if underscore[2]:s+=A
+-		return self.represent_scalar(_G,prefix+s,anchor=anchor)
+-	def represent_scalar_int(self,data):
+-		if data._width is not _A:s='{:0{}d}'.format(data,data._width)
+-		else:s=format(data,'d')
+-		anchor=data.yaml_anchor(any=_B);return self.insert_underscore('',s,data._underscore,anchor=anchor)
+-	def represent_binary_int(self,data):
+-		if data._width is not _A:s='{:0{}b}'.format(data,data._width)
+-		else:s=format(data,'b')
+-		anchor=data.yaml_anchor(any=_B);return self.insert_underscore('0b',s,data._underscore,anchor=anchor)
+-	def represent_octal_int(self,data):
+-		if data._width is not _A:s='{:0{}o}'.format(data,data._width)
+-		else:s=format(data,'o')
+-		anchor=data.yaml_anchor(any=_B);return self.insert_underscore('0o',s,data._underscore,anchor=anchor)
+-	def represent_hex_int(self,data):
+-		if data._width is not _A:s='{:0{}x}'.format(data,data._width)
+-		else:s=format(data,'x')
+-		anchor=data.yaml_anchor(any=_B);return self.insert_underscore('0x',s,data._underscore,anchor=anchor)
+-	def represent_hex_caps_int(self,data):
+-		if data._width is not _A:s='{:0{}X}'.format(data,data._width)
+-		else:s=format(data,'X')
+-		anchor=data.yaml_anchor(any=_B);return self.insert_underscore('0x',s,data._underscore,anchor=anchor)
+-	def represent_scalar_float(self,data):
+-		C='+';B='{:{}0{}d}';A='0';value=_A;anchor=data.yaml_anchor(any=_B)
+-		if data!=data or data==_K and data==1.0:value=_W
+-		elif data==self.inf_value:value=_X
+-		elif data==-self.inf_value:value=_Y
+-		if value:return self.represent_scalar(_N,value,anchor=anchor)
+-		if data._exp is _A and data._prec>0 and data._prec==data._width-1:value='{}{:d}.'.format(data._m_sign if data._m_sign else'',abs(int(data)))
+-		elif data._exp is _A:
+-			prec=data._prec;ms=data._m_sign if data._m_sign else'';value='{}{:0{}.{}f}'.format(ms,abs(data),data._width-len(ms),data._width-prec-1)
+-			if prec==0 or prec==1 and ms!='':value=value.replace('0.',_H)
+-			while len(value)<data._width:value+=A
+-		else:
+-			m,es='{:{}.{}e}'.format(data,data._width,data._width+(1 if data._m_sign else 0)).split('e');w=data._width if data._prec>0 else data._width+1
+-			if data<0:w+=1
+-			m=m[:w];e=int(es);m1,m2=m.split(_H)
+-			while len(m1)+len(m2)<data._width-(1 if data._prec>=0 else 0):m2+=A
+-			if data._m_sign and data>0:m1=C+m1
+-			esgn=C if data._e_sign else''
+-			if data._prec<0:
+-				if m2!=A:e-=len(m2)
+-				else:m2=''
+-				while len(m1)+len(m2)-(1 if data._m_sign else 0)<data._width:m2+=A;e-=1
+-				value=m1+m2+data._exp+B.format(e,esgn,data._e_width)
+-			elif data._prec==0:e-=len(m2);value=m1+m2+_H+data._exp+B.format(e,esgn,data._e_width)
+-			else:
+-				if data._m_lead0>0:m2=A*(data._m_lead0-1)+m1+m2;m1=A;m2=m2[:-data._m_lead0];e+=data._m_lead0
+-				while len(m1)<data._prec:m1+=m2[0];m2=m2[1:];e-=1
+-				value=m1+_H+m2+data._exp+B.format(e,esgn,data._e_width)
+-		if value is _A:value=to_unicode(repr(data)).lower()
+-		return self.represent_scalar(_N,value,anchor=anchor)
+-	def represent_sequence(self,tag,sequence,flow_style=_A):
+-		value=[]
+-		try:flow_style=sequence.fa.flow_style(flow_style)
+-		except AttributeError:flow_style=flow_style
+-		try:anchor=sequence.yaml_anchor()
+-		except AttributeError:anchor=_A
+-		node=SequenceNode(tag,value,flow_style=flow_style,anchor=anchor)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		try:
+-			comment=getattr(sequence,comment_attrib);node.comment=comment.comment
+-			if node.comment and node.comment[1]:
+-				for ct in node.comment[1]:ct.reset()
+-			item_comments=comment.items
+-			for v in item_comments.values():
+-				if v and v[1]:
+-					for ct in v[1]:ct.reset()
+-			item_comments=comment.items;node.comment=comment.comment
+-			try:node.comment.append(comment.end)
+-			except AttributeError:pass
+-		except AttributeError:item_comments={}
+-		for (idx,item) in enumerate(sequence):
+-			node_item=self.represent_data(item);self.merge_comments(node_item,item_comments.get(idx))
+-			if not(isinstance(node_item,ScalarNode)and not node_item.style):best_style=_C
+-			value.append(node_item)
+-		if flow_style is _A:
+-			if len(sequence)!=0 and self.default_flow_style is not _A:node.flow_style=self.default_flow_style
+-			else:node.flow_style=best_style
+-		return node
+-	def merge_comments(self,node,comments):
+-		if comments is _A:assert hasattr(node,_F);return node
+-		if getattr(node,_F,_A)is not _A:
+-			for (idx,val) in enumerate(comments):
+-				if idx>=len(node.comment):continue
+-				nc=node.comment[idx]
+-				if nc is not _A:assert val is _A or val==nc;comments[idx]=nc
+-		node.comment=comments;return node
+-	def represent_key(self,data):
+-		if isinstance(data,CommentedKeySeq):self.alias_key=_A;return self.represent_sequence(_O,data,flow_style=_B)
+-		if isinstance(data,CommentedKeyMap):self.alias_key=_A;return self.represent_mapping(_P,data,flow_style=_B)
+-		return SafeRepresenter.represent_key(self,data)
+-	def represent_mapping(self,tag,mapping,flow_style=_A):
+-		value=[]
+-		try:flow_style=mapping.fa.flow_style(flow_style)
+-		except AttributeError:flow_style=flow_style
+-		try:anchor=mapping.yaml_anchor()
+-		except AttributeError:anchor=_A
+-		node=MappingNode(tag,value,flow_style=flow_style,anchor=anchor)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		try:
+-			comment=getattr(mapping,comment_attrib);node.comment=comment.comment
+-			if node.comment and node.comment[1]:
+-				for ct in node.comment[1]:ct.reset()
+-			item_comments=comment.items
+-			for v in item_comments.values():
+-				if v and v[1]:
+-					for ct in v[1]:ct.reset()
+-			try:node.comment.append(comment.end)
+-			except AttributeError:pass
+-		except AttributeError:item_comments={}
+-		merge_list=[m[1]for m in getattr(mapping,merge_attrib,[])]
+-		try:merge_pos=getattr(mapping,merge_attrib,[[0]])[0][0]
+-		except IndexError:merge_pos=0
+-		item_count=0
+-		if bool(merge_list):items=mapping.non_merged_items()
+-		else:items=mapping.items()
+-		for (item_key,item_value) in items:
+-			item_count+=1;node_key=self.represent_key(item_key);node_value=self.represent_data(item_value);item_comment=item_comments.get(item_key)
+-			if item_comment:
+-				assert getattr(node_key,_F,_A)is _A;node_key.comment=item_comment[:2];nvc=getattr(node_value,_F,_A)
+-				if nvc is not _A:nvc[0]=item_comment[2];nvc[1]=item_comment[3]
+-				else:node_value.comment=item_comment[2:]
+-			if not(isinstance(node_key,ScalarNode)and not node_key.style):best_style=_C
+-			if not(isinstance(node_value,ScalarNode)and not node_value.style):best_style=_C
+-			value.append((node_key,node_value))
+-		if flow_style is _A:
+-			if(item_count!=0 or bool(merge_list))and self.default_flow_style is not _A:node.flow_style=self.default_flow_style
+-			else:node.flow_style=best_style
+-		if bool(merge_list):
+-			if len(merge_list)==1:arg=self.represent_data(merge_list[0])
+-			else:arg=self.represent_data(merge_list);arg.flow_style=_B
+-			value.insert(merge_pos,(ScalarNode('tag:yaml.org,2002:merge','<<'),arg))
+-		return node
+-	def represent_omap(self,tag,omap,flow_style=_A):
+-		value=[]
+-		try:flow_style=omap.fa.flow_style(flow_style)
+-		except AttributeError:flow_style=flow_style
+-		try:anchor=omap.yaml_anchor()
+-		except AttributeError:anchor=_A
+-		node=SequenceNode(tag,value,flow_style=flow_style,anchor=anchor)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		try:
+-			comment=getattr(omap,comment_attrib);node.comment=comment.comment
+-			if node.comment and node.comment[1]:
+-				for ct in node.comment[1]:ct.reset()
+-			item_comments=comment.items
+-			for v in item_comments.values():
+-				if v and v[1]:
+-					for ct in v[1]:ct.reset()
+-			try:node.comment.append(comment.end)
+-			except AttributeError:pass
+-		except AttributeError:item_comments={}
+-		for item_key in omap:
+-			item_val=omap[item_key];node_item=self.represent_data({item_key:item_val});item_comment=item_comments.get(item_key)
+-			if item_comment:
+-				if item_comment[1]:node_item.comment=[_A,item_comment[1]]
+-				assert getattr(node_item.value[0][0],_F,_A)is _A;node_item.value[0][0].comment=[item_comment[0],_A];nvc=getattr(node_item.value[0][1],_F,_A)
+-				if nvc is not _A:nvc[0]=item_comment[2];nvc[1]=item_comment[3]
+-				else:node_item.value[0][1].comment=item_comment[2:]
+-			value.append(node_item)
+-		if flow_style is _A:
+-			if self.default_flow_style is not _A:node.flow_style=self.default_flow_style
+-			else:node.flow_style=best_style
+-		return node
+-	def represent_set(self,setting):
+-		flow_style=_C;tag=_Z;value=[];flow_style=setting.fa.flow_style(flow_style)
+-		try:anchor=setting.yaml_anchor()
+-		except AttributeError:anchor=_A
+-		node=MappingNode(tag,value,flow_style=flow_style,anchor=anchor)
+-		if self.alias_key is not _A:self.represented_objects[self.alias_key]=node
+-		best_style=_B
+-		try:
+-			comment=getattr(setting,comment_attrib);node.comment=comment.comment
+-			if node.comment and node.comment[1]:
+-				for ct in node.comment[1]:ct.reset()
+-			item_comments=comment.items
+-			for v in item_comments.values():
+-				if v and v[1]:
+-					for ct in v[1]:ct.reset()
+-			try:node.comment.append(comment.end)
+-			except AttributeError:pass
+-		except AttributeError:item_comments={}
+-		for item_key in setting.odict:
+-			node_key=self.represent_key(item_key);node_value=self.represent_data(_A);item_comment=item_comments.get(item_key)
+-			if item_comment:assert getattr(node_key,_F,_A)is _A;node_key.comment=item_comment[:2]
+-			node_key.style=node_value.style='?'
+-			if not(isinstance(node_key,ScalarNode)and not node_key.style):best_style=_C
+-			if not(isinstance(node_value,ScalarNode)and not node_value.style):best_style=_C
+-			value.append((node_key,node_value))
+-		best_style=best_style;return node
+-	def represent_dict(self,data):
+-		try:t=data.tag.value
+-		except AttributeError:t=_A
+-		if t:
+-			if t.startswith('!!'):tag=_e+t[2:]
+-			else:tag=t
+-		else:tag=_P
+-		return self.represent_mapping(tag,data)
+-	def represent_list(self,data):
+-		try:t=data.tag.value
+-		except AttributeError:t=_A
+-		if t:
+-			if t.startswith('!!'):tag=_e+t[2:]
+-			else:tag=t
+-		else:tag=_O
+-		return self.represent_sequence(tag,data)
+-	def represent_datetime(self,data):
+-		B='tz';A='delta';inter='T'if data._yaml['t']else' ';_yaml=data._yaml
+-		if _yaml[A]:data+=_yaml[A];value=data.isoformat(inter)
+-		else:value=data.isoformat(inter)
+-		if _yaml[B]:value+=_yaml[B]
+-		return self.represent_scalar(_Q,to_unicode(value))
+-	def represent_tagged_scalar(self,data):
+-		try:tag=data.tag.value
+-		except AttributeError:tag=_A
+-		try:anchor=data.yaml_anchor()
+-		except AttributeError:anchor=_A
+-		return self.represent_scalar(tag,data.value,style=data.style,anchor=anchor)
+-	def represent_scalar_bool(self,data):
+-		try:anchor=data.yaml_anchor()
+-		except AttributeError:anchor=_A
+-		return SafeRepresenter.represent_bool(self,data,anchor=anchor)
+-RoundTripRepresenter.add_representer(type(_A),RoundTripRepresenter.represent_none)
+-RoundTripRepresenter.add_representer(LiteralScalarString,RoundTripRepresenter.represent_literal_scalarstring)
+-RoundTripRepresenter.add_representer(FoldedScalarString,RoundTripRepresenter.represent_folded_scalarstring)
+-RoundTripRepresenter.add_representer(SingleQuotedScalarString,RoundTripRepresenter.represent_single_quoted_scalarstring)
+-RoundTripRepresenter.add_representer(DoubleQuotedScalarString,RoundTripRepresenter.represent_double_quoted_scalarstring)
+-RoundTripRepresenter.add_representer(PlainScalarString,RoundTripRepresenter.represent_plain_scalarstring)
+-RoundTripRepresenter.add_representer(ScalarInt,RoundTripRepresenter.represent_scalar_int)
+-RoundTripRepresenter.add_representer(BinaryInt,RoundTripRepresenter.represent_binary_int)
+-RoundTripRepresenter.add_representer(OctalInt,RoundTripRepresenter.represent_octal_int)
+-RoundTripRepresenter.add_representer(HexInt,RoundTripRepresenter.represent_hex_int)
+-RoundTripRepresenter.add_representer(HexCapsInt,RoundTripRepresenter.represent_hex_caps_int)
+-RoundTripRepresenter.add_representer(ScalarFloat,RoundTripRepresenter.represent_scalar_float)
+-RoundTripRepresenter.add_representer(ScalarBoolean,RoundTripRepresenter.represent_scalar_bool)
+-RoundTripRepresenter.add_representer(CommentedSeq,RoundTripRepresenter.represent_list)
+-RoundTripRepresenter.add_representer(CommentedMap,RoundTripRepresenter.represent_dict)
+-RoundTripRepresenter.add_representer(CommentedOrderedMap,RoundTripRepresenter.represent_ordereddict)
+-if sys.version_info>=(2,7):import collections;RoundTripRepresenter.add_representer(collections.OrderedDict,RoundTripRepresenter.represent_ordereddict)
+-RoundTripRepresenter.add_representer(CommentedSet,RoundTripRepresenter.represent_set)
+-RoundTripRepresenter.add_representer(TaggedScalar,RoundTripRepresenter.represent_tagged_scalar)
+-RoundTripRepresenter.add_representer(TimeStamp,RoundTripRepresenter.represent_datetime)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/resolver.py b/dynaconf/vendor/ruamel/yaml/resolver.py
+deleted file mode 100644
+index 7377ca5..0000000
+--- a/dynaconf/vendor/ruamel/yaml/resolver.py
++++ /dev/null
+@@ -1,160 +0,0 @@
+-from __future__ import absolute_import
+-_J='yaml_implicit_resolvers'
+-_I='typ'
+-_H='-+0123456789'
+-_G='tag:yaml.org,2002:int'
+-_F='-+0123456789.'
+-_E='tag:yaml.org,2002:float'
+-_D='tag:yaml.org,2002:bool'
+-_C=True
+-_B=False
+-_A=None
+-import re
+-if _B:from typing import Any,Dict,List,Union,Text,Optional;from .compat import VersionType
+-from .compat import string_types,_DEFAULT_YAML_VERSION
+-from .error import *
+-from .nodes import MappingNode,ScalarNode,SequenceNode
+-from .util import RegExp
+-__all__=['BaseResolver','Resolver','VersionedResolver']
+-implicit_resolvers=[([(1,2)],_D,RegExp('^(?:true|True|TRUE|false|False|FALSE)$',re.X),list('tTfF')),([(1,1)],_D,RegExp('^(?:y|Y|yes|Yes|YES|n|N|no|No|NO\n        |true|True|TRUE|false|False|FALSE\n        |on|On|ON|off|Off|OFF)$',re.X),list('yYnNtTfFoO')),([(1,2)],_E,RegExp('^(?:\n         [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?\n        |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)\n        |[-+]?\\.[0-9_]+(?:[eE][-+][0-9]+)?\n        |[-+]?\\.(?:inf|Inf|INF)\n        |\\.(?:nan|NaN|NAN))$',re.X),list(_F)),([(1,1)],_E,RegExp('^(?:\n         [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?\n        |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)\n        |\\.[0-9_]+(?:[eE][-+][0-9]+)?\n        |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]*  # sexagesimal float\n        |[-+]?\\.(?:inf|Inf|INF)\n        |\\.(?:nan|NaN|NAN))$',re.X),list(_F)),([(1,2)],_G,RegExp('^(?:[-+]?0b[0-1_]+\n        |[-+]?0o?[0-7_]+\n        |[-+]?[0-9_]+\n        |[-+]?0x[0-9a-fA-F_]+)$',re.X),list(_H)),([(1,1)],_G,RegExp('^(?:[-+]?0b[0-1_]+\n        |[-+]?0?[0-7_]+\n        |[-+]?(?:0|[1-9][0-9_]*)\n        |[-+]?0x[0-9a-fA-F_]+\n        |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$',re.X),list(_H)),([(1,2),(1,1)],'tag:yaml.org,2002:merge',RegExp('^(?:<<)$'),['<']),([(1,2),(1,1)],'tag:yaml.org,2002:null',RegExp('^(?: ~\n        |null|Null|NULL\n        | )$',re.X),['~','n','N','']),([(1,2),(1,1)],'tag:yaml.org,2002:timestamp',RegExp('^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]\n        |[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?\n        (?:[Tt]|[ \\t]+)[0-9][0-9]?\n        :[0-9][0-9] :[0-9][0-9] (?:\\.[0-9]*)?\n        (?:[ \\t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$',re.X),list('0123456789')),([(1,2),(1,1)],'tag:yaml.org,2002:value',RegExp('^(?:=)$'),['=']),([(1,2),(1,1)],'tag:yaml.org,2002:yaml',RegExp('^(?:!|&|\\*)$'),list('!&*'))]
+-class ResolverError(YAMLError):0
+-class BaseResolver:
+-	DEFAULT_SCALAR_TAG='tag:yaml.org,2002:str';DEFAULT_SEQUENCE_TAG='tag:yaml.org,2002:seq';DEFAULT_MAPPING_TAG='tag:yaml.org,2002:map';yaml_implicit_resolvers={};yaml_path_resolvers={}
+-	def __init__(self,loadumper=_A):
+-		self.loadumper=loadumper
+-		if self.loadumper is not _A and getattr(self.loadumper,'_resolver',_A)is _A:self.loadumper._resolver=self.loadumper
+-		self._loader_version=_A;self.resolver_exact_paths=[];self.resolver_prefix_paths=[]
+-	@property
+-	def parser(self):
+-		if self.loadumper is not _A:
+-			if hasattr(self.loadumper,_I):return self.loadumper.parser
+-			return self.loadumper._parser
+-		return _A
+-	@classmethod
+-	def add_implicit_resolver_base(cls,tag,regexp,first):
+-		if _J not in cls.__dict__:cls.yaml_implicit_resolvers=dict(((k,cls.yaml_implicit_resolvers[k][:])for k in cls.yaml_implicit_resolvers))
+-		if first is _A:first=[_A]
+-		for ch in first:cls.yaml_implicit_resolvers.setdefault(ch,[]).append((tag,regexp))
+-	@classmethod
+-	def add_implicit_resolver(cls,tag,regexp,first):
+-		if _J not in cls.__dict__:cls.yaml_implicit_resolvers=dict(((k,cls.yaml_implicit_resolvers[k][:])for k in cls.yaml_implicit_resolvers))
+-		if first is _A:first=[_A]
+-		for ch in first:cls.yaml_implicit_resolvers.setdefault(ch,[]).append((tag,regexp))
+-		implicit_resolvers.append(([(1,2),(1,1)],tag,regexp,first))
+-	@classmethod
+-	def add_path_resolver(cls,tag,path,kind=_A):
+-		if'yaml_path_resolvers'not in cls.__dict__:cls.yaml_path_resolvers=cls.yaml_path_resolvers.copy()
+-		new_path=[]
+-		for element in path:
+-			if isinstance(element,(list,tuple)):
+-				if len(element)==2:node_check,index_check=element
+-				elif len(element)==1:node_check=element[0];index_check=_C
+-				else:raise ResolverError('Invalid path element: %s'%(element,))
+-			else:node_check=_A;index_check=element
+-			if node_check is str:node_check=ScalarNode
+-			elif node_check is list:node_check=SequenceNode
+-			elif node_check is dict:node_check=MappingNode
+-			elif node_check not in[ScalarNode,SequenceNode,MappingNode]and not isinstance(node_check,string_types)and node_check is not _A:raise ResolverError('Invalid node checker: %s'%(node_check,))
+-			if not isinstance(index_check,(string_types,int))and index_check is not _A:raise ResolverError('Invalid index checker: %s'%(index_check,))
+-			new_path.append((node_check,index_check))
+-		if kind is str:kind=ScalarNode
+-		elif kind is list:kind=SequenceNode
+-		elif kind is dict:kind=MappingNode
+-		elif kind not in[ScalarNode,SequenceNode,MappingNode]and kind is not _A:raise ResolverError('Invalid node kind: %s'%(kind,))
+-		cls.yaml_path_resolvers[tuple(new_path),kind]=tag
+-	def descend_resolver(self,current_node,current_index):
+-		if not self.yaml_path_resolvers:return
+-		exact_paths={};prefix_paths=[]
+-		if current_node:
+-			depth=len(self.resolver_prefix_paths)
+-			for (path,kind) in self.resolver_prefix_paths[-1]:
+-				if self.check_resolver_prefix(depth,path,kind,current_node,current_index):
+-					if len(path)>depth:prefix_paths.append((path,kind))
+-					else:exact_paths[kind]=self.yaml_path_resolvers[path,kind]
+-		else:
+-			for (path,kind) in self.yaml_path_resolvers:
+-				if not path:exact_paths[kind]=self.yaml_path_resolvers[path,kind]
+-				else:prefix_paths.append((path,kind))
+-		self.resolver_exact_paths.append(exact_paths);self.resolver_prefix_paths.append(prefix_paths)
+-	def ascend_resolver(self):
+-		if not self.yaml_path_resolvers:return
+-		self.resolver_exact_paths.pop();self.resolver_prefix_paths.pop()
+-	def check_resolver_prefix(self,depth,path,kind,current_node,current_index):
+-		node_check,index_check=path[depth-1]
+-		if isinstance(node_check,string_types):
+-			if current_node.tag!=node_check:return _B
+-		elif node_check is not _A:
+-			if not isinstance(current_node,node_check):return _B
+-		if index_check is _C and current_index is not _A:return _B
+-		if(index_check is _B or index_check is _A)and current_index is _A:return _B
+-		if isinstance(index_check,string_types):
+-			if not(isinstance(current_index,ScalarNode)and index_check==current_index.value):return _B
+-		elif isinstance(index_check,int)and not isinstance(index_check,bool):
+-			if index_check!=current_index:return _B
+-		return _C
+-	def resolve(self,kind,value,implicit):
+-		if kind is ScalarNode and implicit[0]:
+-			if value=='':resolvers=self.yaml_implicit_resolvers.get('',[])
+-			else:resolvers=self.yaml_implicit_resolvers.get(value[0],[])
+-			resolvers+=self.yaml_implicit_resolvers.get(_A,[])
+-			for (tag,regexp) in resolvers:
+-				if regexp.match(value):return tag
+-			implicit=implicit[1]
+-		if bool(self.yaml_path_resolvers):
+-			exact_paths=self.resolver_exact_paths[-1]
+-			if kind in exact_paths:return exact_paths[kind]
+-			if _A in exact_paths:return exact_paths[_A]
+-		if kind is ScalarNode:return self.DEFAULT_SCALAR_TAG
+-		elif kind is SequenceNode:return self.DEFAULT_SEQUENCE_TAG
+-		elif kind is MappingNode:return self.DEFAULT_MAPPING_TAG
+-	@property
+-	def processing_version(self):return _A
+-class Resolver(BaseResolver):0
+-for ir in implicit_resolvers:
+-	if(1,2)in ir[0]:Resolver.add_implicit_resolver_base(*ir[1:])
+-class VersionedResolver(BaseResolver):
+-	def __init__(self,version=_A,loader=_A,loadumper=_A):
+-		if loader is _A and loadumper is not _A:loader=loadumper
+-		BaseResolver.__init__(self,loader);self._loader_version=self.get_loader_version(version);self._version_implicit_resolver={}
+-	def add_version_implicit_resolver(self,version,tag,regexp,first):
+-		if first is _A:first=[_A]
+-		impl_resolver=self._version_implicit_resolver.setdefault(version,{})
+-		for ch in first:impl_resolver.setdefault(ch,[]).append((tag,regexp))
+-	def get_loader_version(self,version):
+-		if version is _A or isinstance(version,tuple):return version
+-		if isinstance(version,list):return tuple(version)
+-		return tuple(map(int,version.split('.')))
+-	@property
+-	def versioned_resolver(self):
+-		version=self.processing_version
+-		if version not in self._version_implicit_resolver:
+-			for x in implicit_resolvers:
+-				if version in x[0]:self.add_version_implicit_resolver(version,x[1],x[2],x[3])
+-		return self._version_implicit_resolver[version]
+-	def resolve(self,kind,value,implicit):
+-		if kind is ScalarNode and implicit[0]:
+-			if value=='':resolvers=self.versioned_resolver.get('',[])
+-			else:resolvers=self.versioned_resolver.get(value[0],[])
+-			resolvers+=self.versioned_resolver.get(_A,[])
+-			for (tag,regexp) in resolvers:
+-				if regexp.match(value):return tag
+-			implicit=implicit[1]
+-		if bool(self.yaml_path_resolvers):
+-			exact_paths=self.resolver_exact_paths[-1]
+-			if kind in exact_paths:return exact_paths[kind]
+-			if _A in exact_paths:return exact_paths[_A]
+-		if kind is ScalarNode:return self.DEFAULT_SCALAR_TAG
+-		elif kind is SequenceNode:return self.DEFAULT_SEQUENCE_TAG
+-		elif kind is MappingNode:return self.DEFAULT_MAPPING_TAG
+-	@property
+-	def processing_version(self):
+-		try:version=self.loadumper._scanner.yaml_version
+-		except AttributeError:
+-			try:
+-				if hasattr(self.loadumper,_I):version=self.loadumper.version
+-				else:version=self.loadumper._serializer.use_version
+-			except AttributeError:version=_A
+-		if version is _A:
+-			version=self._loader_version
+-			if version is _A:version=_DEFAULT_YAML_VERSION
+-		return version
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/scalarbool.py b/dynaconf/vendor/ruamel/yaml/scalarbool.py
+deleted file mode 100644
+index 84c7cc2..0000000
+--- a/dynaconf/vendor/ruamel/yaml/scalarbool.py
++++ /dev/null
+@@ -1,21 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_B=False
+-_A=None
+-from .anchor import Anchor
+-if _B:from typing import Text,Any,Dict,List
+-__all__=['ScalarBoolean']
+-class ScalarBoolean(int):
+-	def __new__(D,*E,**A):
+-		B=A.pop('anchor',_A);C=int.__new__(D,*E,**A)
+-		if B is not _A:C.yaml_set_anchor(B,always_dump=True)
+-		return C
+-	@property
+-	def anchor(self):
+-		A=self
+-		if not hasattr(A,Anchor.attrib):setattr(A,Anchor.attrib,Anchor())
+-		return getattr(A,Anchor.attrib)
+-	def yaml_anchor(A,any=_B):
+-		if not hasattr(A,Anchor.attrib):return _A
+-		if any or A.anchor.always_dump:return A.anchor
+-		return _A
+-	def yaml_set_anchor(A,value,always_dump=_B):A.anchor.value=value;A.anchor.always_dump=always_dump
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/scalarfloat.py b/dynaconf/vendor/ruamel/yaml/scalarfloat.py
+deleted file mode 100644
+index fab3a1b..0000000
+--- a/dynaconf/vendor/ruamel/yaml/scalarfloat.py
++++ /dev/null
+@@ -1,33 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_B=False
+-_A=None
+-import sys
+-from .compat import no_limit_int
+-from .anchor import Anchor
+-if _B:from typing import Text,Any,Dict,List
+-__all__=['ScalarFloat','ExponentialFloat','ExponentialCapsFloat']
+-class ScalarFloat(float):
+-	def __new__(D,*E,**A):
+-		F=A.pop('width',_A);G=A.pop('prec',_A);H=A.pop('m_sign',_A);I=A.pop('m_lead0',0);J=A.pop('exp',_A);K=A.pop('e_width',_A);L=A.pop('e_sign',_A);M=A.pop('underscore',_A);C=A.pop('anchor',_A);B=float.__new__(D,*E,**A);B._width=F;B._prec=G;B._m_sign=H;B._m_lead0=I;B._exp=J;B._e_width=K;B._e_sign=L;B._underscore=M
+-		if C is not _A:B.yaml_set_anchor(C,always_dump=True)
+-		return B
+-	def __iadd__(A,a):return float(A)+a;B=type(A)(A+a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __ifloordiv__(A,a):return float(A)//a;B=type(A)(A//a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __imul__(A,a):return float(A)*a;B=type(A)(A*a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;B._prec=A._prec;return B
+-	def __ipow__(A,a):return float(A)**a;B=type(A)(A**a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __isub__(A,a):return float(A)-a;B=type(A)(A-a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	@property
+-	def anchor(self):
+-		A=self
+-		if not hasattr(A,Anchor.attrib):setattr(A,Anchor.attrib,Anchor())
+-		return getattr(A,Anchor.attrib)
+-	def yaml_anchor(A,any=_B):
+-		if not hasattr(A,Anchor.attrib):return _A
+-		if any or A.anchor.always_dump:return A.anchor
+-		return _A
+-	def yaml_set_anchor(A,value,always_dump=_B):A.anchor.value=value;A.anchor.always_dump=always_dump
+-	def dump(A,out=sys.stdout):out.write('ScalarFloat({}| w:{}, p:{}, s:{}, lz:{}, _:{}|{}, w:{}, s:{})\n'.format(A,A._width,A._prec,A._m_sign,A._m_lead0,A._underscore,A._exp,A._e_width,A._e_sign))
+-class ExponentialFloat(ScalarFloat):
+-	def __new__(A,value,width=_A,underscore=_A):return ScalarFloat.__new__(A,value,width=width,underscore=underscore)
+-class ExponentialCapsFloat(ScalarFloat):
+-	def __new__(A,value,width=_A,underscore=_A):return ScalarFloat.__new__(A,value,width=width,underscore=underscore)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/scalarint.py b/dynaconf/vendor/ruamel/yaml/scalarint.py
+deleted file mode 100644
+index e61b7eb..0000000
+--- a/dynaconf/vendor/ruamel/yaml/scalarint.py
++++ /dev/null
+@@ -1,37 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_B=False
+-_A=None
+-from .compat import no_limit_int
+-from .anchor import Anchor
+-if _B:from typing import Text,Any,Dict,List
+-__all__=['ScalarInt','BinaryInt','OctalInt','HexInt','HexCapsInt','DecimalInt']
+-class ScalarInt(no_limit_int):
+-	def __new__(D,*E,**A):
+-		F=A.pop('width',_A);G=A.pop('underscore',_A);C=A.pop('anchor',_A);B=no_limit_int.__new__(D,*E,**A);B._width=F;B._underscore=G
+-		if C is not _A:B.yaml_set_anchor(C,always_dump=True)
+-		return B
+-	def __iadd__(A,a):B=type(A)(A+a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __ifloordiv__(A,a):B=type(A)(A//a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __imul__(A,a):B=type(A)(A*a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __ipow__(A,a):B=type(A)(A**a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	def __isub__(A,a):B=type(A)(A-a);B._width=A._width;B._underscore=A._underscore[:]if A._underscore is not _A else _A;return B
+-	@property
+-	def anchor(self):
+-		A=self
+-		if not hasattr(A,Anchor.attrib):setattr(A,Anchor.attrib,Anchor())
+-		return getattr(A,Anchor.attrib)
+-	def yaml_anchor(A,any=_B):
+-		if not hasattr(A,Anchor.attrib):return _A
+-		if any or A.anchor.always_dump:return A.anchor
+-		return _A
+-	def yaml_set_anchor(A,value,always_dump=_B):A.anchor.value=value;A.anchor.always_dump=always_dump
+-class BinaryInt(ScalarInt):
+-	def __new__(A,value,width=_A,underscore=_A,anchor=_A):return ScalarInt.__new__(A,value,width=width,underscore=underscore,anchor=anchor)
+-class OctalInt(ScalarInt):
+-	def __new__(A,value,width=_A,underscore=_A,anchor=_A):return ScalarInt.__new__(A,value,width=width,underscore=underscore,anchor=anchor)
+-class HexInt(ScalarInt):
+-	def __new__(A,value,width=_A,underscore=_A,anchor=_A):return ScalarInt.__new__(A,value,width=width,underscore=underscore,anchor=anchor)
+-class HexCapsInt(ScalarInt):
+-	def __new__(A,value,width=_A,underscore=_A,anchor=_A):return ScalarInt.__new__(A,value,width=width,underscore=underscore,anchor=anchor)
+-class DecimalInt(ScalarInt):
+-	def __new__(A,value,width=_A,underscore=_A,anchor=_A):return ScalarInt.__new__(A,value,width=width,underscore=underscore,anchor=anchor)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/scalarstring.py b/dynaconf/vendor/ruamel/yaml/scalarstring.py
+deleted file mode 100644
+index 53b9c39..0000000
+--- a/dynaconf/vendor/ruamel/yaml/scalarstring.py
++++ /dev/null
+@@ -1,59 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_D='comment'
+-_C='\n'
+-_B=False
+-_A=None
+-from .compat import text_type
+-from .anchor import Anchor
+-if _B:from typing import Text,Any,Dict,List
+-__all__=['ScalarString','LiteralScalarString','FoldedScalarString','SingleQuotedScalarString','DoubleQuotedScalarString','PlainScalarString','PreservedScalarString']
+-class ScalarString(text_type):
+-	__slots__=Anchor.attrib
+-	def __new__(D,*E,**A):
+-		B=A.pop('anchor',_A);C=text_type.__new__(D,*E,**A)
+-		if B is not _A:C.yaml_set_anchor(B,always_dump=True)
+-		return C
+-	def replace(A,old,new,maxreplace=-1):return type(A)(text_type.replace(A,old,new,maxreplace))
+-	@property
+-	def anchor(self):
+-		A=self
+-		if not hasattr(A,Anchor.attrib):setattr(A,Anchor.attrib,Anchor())
+-		return getattr(A,Anchor.attrib)
+-	def yaml_anchor(A,any=_B):
+-		if not hasattr(A,Anchor.attrib):return _A
+-		if any or A.anchor.always_dump:return A.anchor
+-		return _A
+-	def yaml_set_anchor(A,value,always_dump=_B):A.anchor.value=value;A.anchor.always_dump=always_dump
+-class LiteralScalarString(ScalarString):
+-	__slots__=_D;style='|'
+-	def __new__(A,value,anchor=_A):return ScalarString.__new__(A,value,anchor=anchor)
+-PreservedScalarString=LiteralScalarString
+-class FoldedScalarString(ScalarString):
+-	__slots__='fold_pos',_D;style='>'
+-	def __new__(A,value,anchor=_A):return ScalarString.__new__(A,value,anchor=anchor)
+-class SingleQuotedScalarString(ScalarString):
+-	__slots__=();style="'"
+-	def __new__(A,value,anchor=_A):return ScalarString.__new__(A,value,anchor=anchor)
+-class DoubleQuotedScalarString(ScalarString):
+-	__slots__=();style='"'
+-	def __new__(A,value,anchor=_A):return ScalarString.__new__(A,value,anchor=anchor)
+-class PlainScalarString(ScalarString):
+-	__slots__=();style=''
+-	def __new__(A,value,anchor=_A):return ScalarString.__new__(A,value,anchor=anchor)
+-def preserve_literal(s):return LiteralScalarString(s.replace('\r\n',_C).replace('\r',_C))
+-def walk_tree(base,map=_A):
+-	A=base;from dynaconf.vendor.ruamel.yaml.compat import string_types as E,MutableMapping as G,MutableSequence as H
+-	if map is _A:map={_C:preserve_literal}
+-	if isinstance(A,G):
+-		for F in A:
+-			C=A[F]
+-			if isinstance(C,E):
+-				for B in map:
+-					if B in C:A[F]=map[B](C);break
+-			else:walk_tree(C)
+-	elif isinstance(A,H):
+-		for (I,D) in enumerate(A):
+-			if isinstance(D,E):
+-				for B in map:
+-					if B in D:A[I]=map[B](D);break
+-			else:walk_tree(D)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/scanner.py b/dynaconf/vendor/ruamel/yaml/scanner.py
+deleted file mode 100644
+index dfbd1b4..0000000
+--- a/dynaconf/vendor/ruamel/yaml/scanner.py
++++ /dev/null
+@@ -1,602 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_o='\u2028\u2029'
+-_n='\r\n'
+-_m='\r\n\x85'
+-_l='while scanning a quoted scalar'
+-_k='0123456789ABCDEFabcdef'
+-_j=' \r\n\x85\u2028\u2029'
+-_i='\x07'
+-_h='expected a comment or a line break, but found %r'
+-_g='directive'
+-_f='\ufeff'
+-_e="could not find expected ':'"
+-_d='while scanning a simple key'
+-_c='typ'
+-_b='\\'
+-_a='\t'
+-_Z="expected ' ', but found %r"
+-_Y='while scanning a %s'
+-_X='\r\n\x85\u2028\u2029'
+-_W='while scanning a block scalar'
+-_V='expected alphabetic or numeric character, but found %r'
+-_U='a'
+-_T='...'
+-_S='---'
+-_R='>'
+-_Q='9'
+-_P='"'
+-_O=':'
+-_N='-'
+-_M=' \t'
+-_L='\x00 \r\n\x85\u2028\u2029'
+-_K='0'
+-_J="'"
+-_I='\x00'
+-_H='!'
+-_G='while scanning a directive'
+-_F='#'
+-_E='\n'
+-_D=' '
+-_C=None
+-_B=False
+-_A=True
+-from .error import MarkedYAMLError
+-from .tokens import *
+-from .compat import utf8,unichr,PY3,check_anchorname_char,nprint
+-if _B:from typing import Any,Dict,Optional,List,Union,Text;from .compat import VersionType
+-__all__=['Scanner','RoundTripScanner','ScannerError']
+-_THE_END='\n\x00\r\x85\u2028\u2029'
+-_THE_END_SPACE_TAB=' \n\x00\t\r\x85\u2028\u2029'
+-_SPACE_TAB=_M
+-class ScannerError(MarkedYAMLError):0
+-class SimpleKey:
+-	def __init__(self,token_number,required,index,line,column,mark):self.token_number=token_number;self.required=required;self.index=index;self.line=line;self.column=column;self.mark=mark
+-class Scanner:
+-	def __init__(self,loader=_C):
+-		self.loader=loader
+-		if self.loader is not _C and getattr(self.loader,'_scanner',_C)is _C:self.loader._scanner=self
+-		self.reset_scanner();self.first_time=_B;self.yaml_version=_C
+-	@property
+-	def flow_level(self):return len(self.flow_context)
+-	def reset_scanner(self):self.done=_B;self.flow_context=[];self.tokens=[];self.fetch_stream_start();self.tokens_taken=0;self.indent=-1;self.indents=[];self.allow_simple_key=_A;self.possible_simple_keys={}
+-	@property
+-	def reader(self):
+-		try:return self._scanner_reader
+-		except AttributeError:
+-			if hasattr(self.loader,_c):self._scanner_reader=self.loader.reader
+-			else:self._scanner_reader=self.loader._reader
+-			return self._scanner_reader
+-	@property
+-	def scanner_processing_version(self):
+-		if hasattr(self.loader,_c):return self.loader.resolver.processing_version
+-		return self.loader.processing_version
+-	def check_token(self,*choices):
+-		while self.need_more_tokens():self.fetch_more_tokens()
+-		if bool(self.tokens):
+-			if not choices:return _A
+-			for choice in choices:
+-				if isinstance(self.tokens[0],choice):return _A
+-		return _B
+-	def peek_token(self):
+-		while self.need_more_tokens():self.fetch_more_tokens()
+-		if bool(self.tokens):return self.tokens[0]
+-	def get_token(self):
+-		while self.need_more_tokens():self.fetch_more_tokens()
+-		if bool(self.tokens):self.tokens_taken+=1;return self.tokens.pop(0)
+-	def need_more_tokens(self):
+-		if self.done:return _B
+-		if not self.tokens:return _A
+-		self.stale_possible_simple_keys()
+-		if self.next_possible_simple_key()==self.tokens_taken:return _A
+-		return _B
+-	def fetch_comment(self,comment):raise NotImplementedError
+-	def fetch_more_tokens(self):
+-		comment=self.scan_to_next_token()
+-		if comment is not _C:return self.fetch_comment(comment)
+-		self.stale_possible_simple_keys();self.unwind_indent(self.reader.column);ch=self.reader.peek()
+-		if ch==_I:return self.fetch_stream_end()
+-		if ch=='%'and self.check_directive():return self.fetch_directive()
+-		if ch==_N and self.check_document_start():return self.fetch_document_start()
+-		if ch=='.'and self.check_document_end():return self.fetch_document_end()
+-		if ch=='[':return self.fetch_flow_sequence_start()
+-		if ch=='{':return self.fetch_flow_mapping_start()
+-		if ch==']':return self.fetch_flow_sequence_end()
+-		if ch=='}':return self.fetch_flow_mapping_end()
+-		if ch==',':return self.fetch_flow_entry()
+-		if ch==_N and self.check_block_entry():return self.fetch_block_entry()
+-		if ch=='?'and self.check_key():return self.fetch_key()
+-		if ch==_O and self.check_value():return self.fetch_value()
+-		if ch=='*':return self.fetch_alias()
+-		if ch=='&':return self.fetch_anchor()
+-		if ch==_H:return self.fetch_tag()
+-		if ch=='|'and not self.flow_level:return self.fetch_literal()
+-		if ch==_R and not self.flow_level:return self.fetch_folded()
+-		if ch==_J:return self.fetch_single()
+-		if ch==_P:return self.fetch_double()
+-		if self.check_plain():return self.fetch_plain()
+-		raise ScannerError('while scanning for the next token',_C,'found character %r that cannot start any token'%utf8(ch),self.reader.get_mark())
+-	def next_possible_simple_key(self):
+-		min_token_number=_C
+-		for level in self.possible_simple_keys:
+-			key=self.possible_simple_keys[level]
+-			if min_token_number is _C or key.token_number<min_token_number:min_token_number=key.token_number
+-		return min_token_number
+-	def stale_possible_simple_keys(self):
+-		for level in list(self.possible_simple_keys):
+-			key=self.possible_simple_keys[level]
+-			if key.line!=self.reader.line or self.reader.index-key.index>1024:
+-				if key.required:raise ScannerError(_d,key.mark,_e,self.reader.get_mark())
+-				del self.possible_simple_keys[level]
+-	def save_possible_simple_key(self):
+-		required=not self.flow_level and self.indent==self.reader.column
+-		if self.allow_simple_key:self.remove_possible_simple_key();token_number=self.tokens_taken+len(self.tokens);key=SimpleKey(token_number,required,self.reader.index,self.reader.line,self.reader.column,self.reader.get_mark());self.possible_simple_keys[self.flow_level]=key
+-	def remove_possible_simple_key(self):
+-		if self.flow_level in self.possible_simple_keys:
+-			key=self.possible_simple_keys[self.flow_level]
+-			if key.required:raise ScannerError(_d,key.mark,_e,self.reader.get_mark())
+-			del self.possible_simple_keys[self.flow_level]
+-	def unwind_indent(self,column):
+-		if bool(self.flow_level):return
+-		while self.indent>column:mark=self.reader.get_mark();self.indent=self.indents.pop();self.tokens.append(BlockEndToken(mark,mark))
+-	def add_indent(self,column):
+-		if self.indent<column:self.indents.append(self.indent);self.indent=column;return _A
+-		return _B
+-	def fetch_stream_start(self):mark=self.reader.get_mark();self.tokens.append(StreamStartToken(mark,mark,encoding=self.reader.encoding))
+-	def fetch_stream_end(self):self.unwind_indent(-1);self.remove_possible_simple_key();self.allow_simple_key=_B;self.possible_simple_keys={};mark=self.reader.get_mark();self.tokens.append(StreamEndToken(mark,mark));self.done=_A
+-	def fetch_directive(self):self.unwind_indent(-1);self.remove_possible_simple_key();self.allow_simple_key=_B;self.tokens.append(self.scan_directive())
+-	def fetch_document_start(self):self.fetch_document_indicator(DocumentStartToken)
+-	def fetch_document_end(self):self.fetch_document_indicator(DocumentEndToken)
+-	def fetch_document_indicator(self,TokenClass):self.unwind_indent(-1);self.remove_possible_simple_key();self.allow_simple_key=_B;start_mark=self.reader.get_mark();self.reader.forward(3);end_mark=self.reader.get_mark();self.tokens.append(TokenClass(start_mark,end_mark))
+-	def fetch_flow_sequence_start(self):self.fetch_flow_collection_start(FlowSequenceStartToken,to_push='[')
+-	def fetch_flow_mapping_start(self):self.fetch_flow_collection_start(FlowMappingStartToken,to_push='{')
+-	def fetch_flow_collection_start(self,TokenClass,to_push):self.save_possible_simple_key();self.flow_context.append(to_push);self.allow_simple_key=_A;start_mark=self.reader.get_mark();self.reader.forward();end_mark=self.reader.get_mark();self.tokens.append(TokenClass(start_mark,end_mark))
+-	def fetch_flow_sequence_end(self):self.fetch_flow_collection_end(FlowSequenceEndToken)
+-	def fetch_flow_mapping_end(self):self.fetch_flow_collection_end(FlowMappingEndToken)
+-	def fetch_flow_collection_end(self,TokenClass):
+-		self.remove_possible_simple_key()
+-		try:popped=self.flow_context.pop()
+-		except IndexError:pass
+-		self.allow_simple_key=_B;start_mark=self.reader.get_mark();self.reader.forward();end_mark=self.reader.get_mark();self.tokens.append(TokenClass(start_mark,end_mark))
+-	def fetch_flow_entry(self):self.allow_simple_key=_A;self.remove_possible_simple_key();start_mark=self.reader.get_mark();self.reader.forward();end_mark=self.reader.get_mark();self.tokens.append(FlowEntryToken(start_mark,end_mark))
+-	def fetch_block_entry(self):
+-		if not self.flow_level:
+-			if not self.allow_simple_key:raise ScannerError(_C,_C,'sequence entries are not allowed here',self.reader.get_mark())
+-			if self.add_indent(self.reader.column):mark=self.reader.get_mark();self.tokens.append(BlockSequenceStartToken(mark,mark))
+-		else:0
+-		self.allow_simple_key=_A;self.remove_possible_simple_key();start_mark=self.reader.get_mark();self.reader.forward();end_mark=self.reader.get_mark();self.tokens.append(BlockEntryToken(start_mark,end_mark))
+-	def fetch_key(self):
+-		if not self.flow_level:
+-			if not self.allow_simple_key:raise ScannerError(_C,_C,'mapping keys are not allowed here',self.reader.get_mark())
+-			if self.add_indent(self.reader.column):mark=self.reader.get_mark();self.tokens.append(BlockMappingStartToken(mark,mark))
+-		self.allow_simple_key=not self.flow_level;self.remove_possible_simple_key();start_mark=self.reader.get_mark();self.reader.forward();end_mark=self.reader.get_mark();self.tokens.append(KeyToken(start_mark,end_mark))
+-	def fetch_value(self):
+-		if self.flow_level in self.possible_simple_keys:
+-			key=self.possible_simple_keys[self.flow_level];del self.possible_simple_keys[self.flow_level];self.tokens.insert(key.token_number-self.tokens_taken,KeyToken(key.mark,key.mark))
+-			if not self.flow_level:
+-				if self.add_indent(key.column):self.tokens.insert(key.token_number-self.tokens_taken,BlockMappingStartToken(key.mark,key.mark))
+-			self.allow_simple_key=_B
+-		else:
+-			if not self.flow_level:
+-				if not self.allow_simple_key:raise ScannerError(_C,_C,'mapping values are not allowed here',self.reader.get_mark())
+-			if not self.flow_level:
+-				if self.add_indent(self.reader.column):mark=self.reader.get_mark();self.tokens.append(BlockMappingStartToken(mark,mark))
+-			self.allow_simple_key=not self.flow_level;self.remove_possible_simple_key()
+-		start_mark=self.reader.get_mark();self.reader.forward();end_mark=self.reader.get_mark();self.tokens.append(ValueToken(start_mark,end_mark))
+-	def fetch_alias(self):self.save_possible_simple_key();self.allow_simple_key=_B;self.tokens.append(self.scan_anchor(AliasToken))
+-	def fetch_anchor(self):self.save_possible_simple_key();self.allow_simple_key=_B;self.tokens.append(self.scan_anchor(AnchorToken))
+-	def fetch_tag(self):self.save_possible_simple_key();self.allow_simple_key=_B;self.tokens.append(self.scan_tag())
+-	def fetch_literal(self):self.fetch_block_scalar(style='|')
+-	def fetch_folded(self):self.fetch_block_scalar(style=_R)
+-	def fetch_block_scalar(self,style):self.allow_simple_key=_A;self.remove_possible_simple_key();self.tokens.append(self.scan_block_scalar(style))
+-	def fetch_single(self):self.fetch_flow_scalar(style=_J)
+-	def fetch_double(self):self.fetch_flow_scalar(style=_P)
+-	def fetch_flow_scalar(self,style):self.save_possible_simple_key();self.allow_simple_key=_B;self.tokens.append(self.scan_flow_scalar(style))
+-	def fetch_plain(self):self.save_possible_simple_key();self.allow_simple_key=_B;self.tokens.append(self.scan_plain())
+-	def check_directive(self):
+-		if self.reader.column==0:return _A
+-		return _C
+-	def check_document_start(self):
+-		if self.reader.column==0:
+-			if self.reader.prefix(3)==_S and self.reader.peek(3)in _THE_END_SPACE_TAB:return _A
+-		return _C
+-	def check_document_end(self):
+-		if self.reader.column==0:
+-			if self.reader.prefix(3)==_T and self.reader.peek(3)in _THE_END_SPACE_TAB:return _A
+-		return _C
+-	def check_block_entry(self):return self.reader.peek(1)in _THE_END_SPACE_TAB
+-	def check_key(self):
+-		if bool(self.flow_level):return _A
+-		return self.reader.peek(1)in _THE_END_SPACE_TAB
+-	def check_value(self):
+-		if self.scanner_processing_version==(1,1):
+-			if bool(self.flow_level):return _A
+-		elif bool(self.flow_level):
+-			if self.flow_context[-1]=='[':
+-				if self.reader.peek(1)not in _THE_END_SPACE_TAB:return _B
+-			elif self.tokens and isinstance(self.tokens[-1],ValueToken):
+-				if self.reader.peek(1)not in _THE_END_SPACE_TAB:return _B
+-			return _A
+-		return self.reader.peek(1)in _THE_END_SPACE_TAB
+-	def check_plain(self):
+-		B='?:';A='\x00 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>\'"%@`';srp=self.reader.peek;ch=srp()
+-		if self.scanner_processing_version==(1,1):return ch not in A or srp(1)not in _THE_END_SPACE_TAB and(ch==_N or not self.flow_level and ch in B)
+-		if ch not in A:return _A
+-		ch1=srp(1)
+-		if ch==_N and ch1 not in _THE_END_SPACE_TAB:return _A
+-		if ch==_O and bool(self.flow_level)and ch1 not in _SPACE_TAB:return _A
+-		return srp(1)not in _THE_END_SPACE_TAB and(ch==_N or not self.flow_level and ch in B)
+-	def scan_to_next_token(self):
+-		srp=self.reader.peek;srf=self.reader.forward
+-		if self.reader.index==0 and srp()==_f:srf()
+-		found=_B;_the_end=_THE_END
+-		while not found:
+-			while srp()==_D:srf()
+-			if srp()==_F:
+-				while srp()not in _the_end:srf()
+-			if self.scan_line_break():
+-				if not self.flow_level:self.allow_simple_key=_A
+-			else:found=_A
+-		return _C
+-	def scan_directive(self):
+-		srp=self.reader.peek;srf=self.reader.forward;start_mark=self.reader.get_mark();srf();name=self.scan_directive_name(start_mark);value=_C
+-		if name=='YAML':value=self.scan_yaml_directive_value(start_mark);end_mark=self.reader.get_mark()
+-		elif name=='TAG':value=self.scan_tag_directive_value(start_mark);end_mark=self.reader.get_mark()
+-		else:
+-			end_mark=self.reader.get_mark()
+-			while srp()not in _THE_END:srf()
+-		self.scan_directive_ignored_line(start_mark);return DirectiveToken(name,value,start_mark,end_mark)
+-	def scan_directive_name(self,start_mark):
+-		length=0;srp=self.reader.peek;ch=srp(length)
+-		while _K<=ch<=_Q or'A'<=ch<='Z'or _U<=ch<='z'or ch in'-_:.':length+=1;ch=srp(length)
+-		if not length:raise ScannerError(_G,start_mark,_V%utf8(ch),self.reader.get_mark())
+-		value=self.reader.prefix(length);self.reader.forward(length);ch=srp()
+-		if ch not in _L:raise ScannerError(_G,start_mark,_V%utf8(ch),self.reader.get_mark())
+-		return value
+-	def scan_yaml_directive_value(self,start_mark):
+-		srp=self.reader.peek;srf=self.reader.forward
+-		while srp()==_D:srf()
+-		major=self.scan_yaml_directive_number(start_mark)
+-		if srp()!='.':raise ScannerError(_G,start_mark,"expected a digit or '.', but found %r"%utf8(srp()),self.reader.get_mark())
+-		srf();minor=self.scan_yaml_directive_number(start_mark)
+-		if srp()not in _L:raise ScannerError(_G,start_mark,"expected a digit or ' ', but found %r"%utf8(srp()),self.reader.get_mark())
+-		self.yaml_version=major,minor;return self.yaml_version
+-	def scan_yaml_directive_number(self,start_mark):
+-		srp=self.reader.peek;srf=self.reader.forward;ch=srp()
+-		if not _K<=ch<=_Q:raise ScannerError(_G,start_mark,'expected a digit, but found %r'%utf8(ch),self.reader.get_mark())
+-		length=0
+-		while _K<=srp(length)<=_Q:length+=1
+-		value=int(self.reader.prefix(length));srf(length);return value
+-	def scan_tag_directive_value(self,start_mark):
+-		srp=self.reader.peek;srf=self.reader.forward
+-		while srp()==_D:srf()
+-		handle=self.scan_tag_directive_handle(start_mark)
+-		while srp()==_D:srf()
+-		prefix=self.scan_tag_directive_prefix(start_mark);return handle,prefix
+-	def scan_tag_directive_handle(self,start_mark):
+-		value=self.scan_tag_handle(_g,start_mark);ch=self.reader.peek()
+-		if ch!=_D:raise ScannerError(_G,start_mark,_Z%utf8(ch),self.reader.get_mark())
+-		return value
+-	def scan_tag_directive_prefix(self,start_mark):
+-		value=self.scan_tag_uri(_g,start_mark);ch=self.reader.peek()
+-		if ch not in _L:raise ScannerError(_G,start_mark,_Z%utf8(ch),self.reader.get_mark())
+-		return value
+-	def scan_directive_ignored_line(self,start_mark):
+-		srp=self.reader.peek;srf=self.reader.forward
+-		while srp()==_D:srf()
+-		if srp()==_F:
+-			while srp()not in _THE_END:srf()
+-		ch=srp()
+-		if ch not in _THE_END:raise ScannerError(_G,start_mark,_h%utf8(ch),self.reader.get_mark())
+-		self.scan_line_break()
+-	def scan_anchor(self,TokenClass):
+-		A='while scanning an %s';srp=self.reader.peek;start_mark=self.reader.get_mark();indicator=srp()
+-		if indicator=='*':name='alias'
+-		else:name='anchor'
+-		self.reader.forward();length=0;ch=srp(length)
+-		while check_anchorname_char(ch):length+=1;ch=srp(length)
+-		if not length:raise ScannerError(A%(name,),start_mark,_V%utf8(ch),self.reader.get_mark())
+-		value=self.reader.prefix(length);self.reader.forward(length)
+-		if ch not in'\x00 \t\r\n\x85\u2028\u2029?:,[]{}%@`':raise ScannerError(A%(name,),start_mark,_V%utf8(ch),self.reader.get_mark())
+-		end_mark=self.reader.get_mark();return TokenClass(value,start_mark,end_mark)
+-	def scan_tag(self):
+-		A='tag';srp=self.reader.peek;start_mark=self.reader.get_mark();ch=srp(1)
+-		if ch=='<':
+-			handle=_C;self.reader.forward(2);suffix=self.scan_tag_uri(A,start_mark)
+-			if srp()!=_R:raise ScannerError('while parsing a tag',start_mark,"expected '>', but found %r"%utf8(srp()),self.reader.get_mark())
+-			self.reader.forward()
+-		elif ch in _THE_END_SPACE_TAB:handle=_C;suffix=_H;self.reader.forward()
+-		else:
+-			length=1;use_handle=_B
+-			while ch not in _L:
+-				if ch==_H:use_handle=_A;break
+-				length+=1;ch=srp(length)
+-			handle=_H
+-			if use_handle:handle=self.scan_tag_handle(A,start_mark)
+-			else:handle=_H;self.reader.forward()
+-			suffix=self.scan_tag_uri(A,start_mark)
+-		ch=srp()
+-		if ch not in _L:raise ScannerError('while scanning a tag',start_mark,_Z%utf8(ch),self.reader.get_mark())
+-		value=handle,suffix;end_mark=self.reader.get_mark();return TagToken(value,start_mark,end_mark)
+-	def scan_block_scalar(self,style,rt=_B):
+-		A='|>';srp=self.reader.peek
+-		if style==_R:folded=_A
+-		else:folded=_B
+-		chunks=[];start_mark=self.reader.get_mark();self.reader.forward();chomping,increment=self.scan_block_scalar_indicators(start_mark);block_scalar_comment=self.scan_block_scalar_ignored_line(start_mark);min_indent=self.indent+1
+-		if increment is _C:
+-			if min_indent<1 and(style not in A or self.scanner_processing_version==(1,1)and getattr(self.loader,'top_level_block_style_scalar_no_indent_error_1_1',_B)):min_indent=1
+-			breaks,max_indent,end_mark=self.scan_block_scalar_indentation();indent=max(min_indent,max_indent)
+-		else:
+-			if min_indent<1:min_indent=1
+-			indent=min_indent+increment-1;breaks,end_mark=self.scan_block_scalar_breaks(indent)
+-		line_break=''
+-		while self.reader.column==indent and srp()!=_I:
+-			chunks.extend(breaks);leading_non_space=srp()not in _M;length=0
+-			while srp(length)not in _THE_END:length+=1
+-			chunks.append(self.reader.prefix(length));self.reader.forward(length);line_break=self.scan_line_break();breaks,end_mark=self.scan_block_scalar_breaks(indent)
+-			if style in A and min_indent==0:
+-				if self.check_document_start()or self.check_document_end():break
+-			if self.reader.column==indent and srp()!=_I:
+-				if rt and folded and line_break==_E:chunks.append(_i)
+-				if folded and line_break==_E and leading_non_space and srp()not in _M:
+-					if not breaks:chunks.append(_D)
+-				else:chunks.append(line_break)
+-			else:break
+-		trailing=[]
+-		if chomping in[_C,_A]:chunks.append(line_break)
+-		if chomping is _A:chunks.extend(breaks)
+-		elif chomping in[_C,_B]:trailing.extend(breaks)
+-		token=ScalarToken(''.join(chunks),_B,start_mark,end_mark,style)
+-		if block_scalar_comment is not _C:token.add_pre_comments([block_scalar_comment])
+-		if len(trailing)>0:
+-			comment=self.scan_to_next_token()
+-			while comment:trailing.append(_D*comment[1].column+comment[0]);comment=self.scan_to_next_token()
+-			comment_end_mark=self.reader.get_mark();comment=CommentToken(''.join(trailing),end_mark,comment_end_mark);token.add_post_comment(comment)
+-		return token
+-	def scan_block_scalar_indicators(self,start_mark):
+-		D='expected indentation indicator in the range 1-9, but found 0';C='0123456789';B='+';A='+-';srp=self.reader.peek;chomping=_C;increment=_C;ch=srp()
+-		if ch in A:
+-			if ch==B:chomping=_A
+-			else:chomping=_B
+-			self.reader.forward();ch=srp()
+-			if ch in C:
+-				increment=int(ch)
+-				if increment==0:raise ScannerError(_W,start_mark,D,self.reader.get_mark())
+-				self.reader.forward()
+-		elif ch in C:
+-			increment=int(ch)
+-			if increment==0:raise ScannerError(_W,start_mark,D,self.reader.get_mark())
+-			self.reader.forward();ch=srp()
+-			if ch in A:
+-				if ch==B:chomping=_A
+-				else:chomping=_B
+-				self.reader.forward()
+-		ch=srp()
+-		if ch not in _L:raise ScannerError(_W,start_mark,'expected chomping or indentation indicators, but found %r'%utf8(ch),self.reader.get_mark())
+-		return chomping,increment
+-	def scan_block_scalar_ignored_line(self,start_mark):
+-		srp=self.reader.peek;srf=self.reader.forward;prefix='';comment=_C
+-		while srp()==_D:prefix+=srp();srf()
+-		if srp()==_F:
+-			comment=prefix
+-			while srp()not in _THE_END:comment+=srp();srf()
+-		ch=srp()
+-		if ch not in _THE_END:raise ScannerError(_W,start_mark,_h%utf8(ch),self.reader.get_mark())
+-		self.scan_line_break();return comment
+-	def scan_block_scalar_indentation(self):
+-		srp=self.reader.peek;srf=self.reader.forward;chunks=[];max_indent=0;end_mark=self.reader.get_mark()
+-		while srp()in _j:
+-			if srp()!=_D:chunks.append(self.scan_line_break());end_mark=self.reader.get_mark()
+-			else:
+-				srf()
+-				if self.reader.column>max_indent:max_indent=self.reader.column
+-		return chunks,max_indent,end_mark
+-	def scan_block_scalar_breaks(self,indent):
+-		chunks=[];srp=self.reader.peek;srf=self.reader.forward;end_mark=self.reader.get_mark()
+-		while self.reader.column<indent and srp()==_D:srf()
+-		while srp()in _X:
+-			chunks.append(self.scan_line_break());end_mark=self.reader.get_mark()
+-			while self.reader.column<indent and srp()==_D:srf()
+-		return chunks,end_mark
+-	def scan_flow_scalar(self,style):
+-		if style==_P:double=_A
+-		else:double=_B
+-		srp=self.reader.peek;chunks=[];start_mark=self.reader.get_mark();quote=srp();self.reader.forward();chunks.extend(self.scan_flow_scalar_non_spaces(double,start_mark))
+-		while srp()!=quote:chunks.extend(self.scan_flow_scalar_spaces(double,start_mark));chunks.extend(self.scan_flow_scalar_non_spaces(double,start_mark))
+-		self.reader.forward();end_mark=self.reader.get_mark();return ScalarToken(''.join(chunks),_B,start_mark,end_mark,style)
+-	ESCAPE_REPLACEMENTS={_K:_I,_U:_i,'b':'\x08','t':_a,_a:_a,'n':_E,'v':'\x0b','f':'\x0c','r':'\r','e':'\x1b',_D:_D,_P:_P,'/':'/',_b:_b,'N':'\x85','_':'\xa0','L':'\u2028','P':'\u2029'};ESCAPE_CODES={'x':2,'u':4,'U':8}
+-	def scan_flow_scalar_non_spaces(self,double,start_mark):
+-		A='while scanning a double-quoted scalar';chunks=[];srp=self.reader.peek;srf=self.reader.forward
+-		while _A:
+-			length=0
+-			while srp(length)not in' \n\'"\\\x00\t\r\x85\u2028\u2029':length+=1
+-			if length!=0:chunks.append(self.reader.prefix(length));srf(length)
+-			ch=srp()
+-			if not double and ch==_J and srp(1)==_J:chunks.append(_J);srf(2)
+-			elif double and ch==_J or not double and ch in'"\\':chunks.append(ch);srf()
+-			elif double and ch==_b:
+-				srf();ch=srp()
+-				if ch in self.ESCAPE_REPLACEMENTS:chunks.append(self.ESCAPE_REPLACEMENTS[ch]);srf()
+-				elif ch in self.ESCAPE_CODES:
+-					length=self.ESCAPE_CODES[ch];srf()
+-					for k in range(length):
+-						if srp(k)not in _k:raise ScannerError(A,start_mark,'expected escape sequence of %d hexdecimal numbers, but found %r'%(length,utf8(srp(k))),self.reader.get_mark())
+-					code=int(self.reader.prefix(length),16);chunks.append(unichr(code));srf(length)
+-				elif ch in'\n\r\x85\u2028\u2029':self.scan_line_break();chunks.extend(self.scan_flow_scalar_breaks(double,start_mark))
+-				else:raise ScannerError(A,start_mark,'found unknown escape character %r'%utf8(ch),self.reader.get_mark())
+-			else:return chunks
+-	def scan_flow_scalar_spaces(self,double,start_mark):
+-		srp=self.reader.peek;chunks=[];length=0
+-		while srp(length)in _M:length+=1
+-		whitespaces=self.reader.prefix(length);self.reader.forward(length);ch=srp()
+-		if ch==_I:raise ScannerError(_l,start_mark,'found unexpected end of stream',self.reader.get_mark())
+-		elif ch in _X:
+-			line_break=self.scan_line_break();breaks=self.scan_flow_scalar_breaks(double,start_mark)
+-			if line_break!=_E:chunks.append(line_break)
+-			elif not breaks:chunks.append(_D)
+-			chunks.extend(breaks)
+-		else:chunks.append(whitespaces)
+-		return chunks
+-	def scan_flow_scalar_breaks(self,double,start_mark):
+-		chunks=[];srp=self.reader.peek;srf=self.reader.forward
+-		while _A:
+-			prefix=self.reader.prefix(3)
+-			if(prefix==_S or prefix==_T)and srp(3)in _THE_END_SPACE_TAB:raise ScannerError(_l,start_mark,'found unexpected document separator',self.reader.get_mark())
+-			while srp()in _M:srf()
+-			if srp()in _X:chunks.append(self.scan_line_break())
+-			else:return chunks
+-	def scan_plain(self):
+-		srp=self.reader.peek;srf=self.reader.forward;chunks=[];start_mark=self.reader.get_mark();end_mark=start_mark;indent=self.indent+1;spaces=[]
+-		while _A:
+-			length=0
+-			if srp()==_F:break
+-			while _A:
+-				ch=srp(length)
+-				if ch==_O and srp(length+1)not in _THE_END_SPACE_TAB:0
+-				elif ch=='?'and self.scanner_processing_version!=(1,1):0
+-				elif ch in _THE_END_SPACE_TAB or not self.flow_level and ch==_O and srp(length+1)in _THE_END_SPACE_TAB or self.flow_level and ch in',:?[]{}':break
+-				length+=1
+-			if self.flow_level and ch==_O and srp(length+1)not in'\x00 \t\r\n\x85\u2028\u2029,[]{}':srf(length);raise ScannerError('while scanning a plain scalar',start_mark,"found unexpected ':'",self.reader.get_mark(),'Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.')
+-			if length==0:break
+-			self.allow_simple_key=_B;chunks.extend(spaces);chunks.append(self.reader.prefix(length));srf(length);end_mark=self.reader.get_mark();spaces=self.scan_plain_spaces(indent,start_mark)
+-			if not spaces or srp()==_F or not self.flow_level and self.reader.column<indent:break
+-		token=ScalarToken(''.join(chunks),_A,start_mark,end_mark)
+-		if spaces and spaces[0]==_E:comment=CommentToken(''.join(spaces)+_E,start_mark,end_mark);token.add_post_comment(comment)
+-		return token
+-	def scan_plain_spaces(self,indent,start_mark):
+-		srp=self.reader.peek;srf=self.reader.forward;chunks=[];length=0
+-		while srp(length)in _D:length+=1
+-		whitespaces=self.reader.prefix(length);self.reader.forward(length);ch=srp()
+-		if ch in _X:
+-			line_break=self.scan_line_break();self.allow_simple_key=_A;prefix=self.reader.prefix(3)
+-			if(prefix==_S or prefix==_T)and srp(3)in _THE_END_SPACE_TAB:return
+-			breaks=[]
+-			while srp()in _j:
+-				if srp()==_D:srf()
+-				else:
+-					breaks.append(self.scan_line_break());prefix=self.reader.prefix(3)
+-					if(prefix==_S or prefix==_T)and srp(3)in _THE_END_SPACE_TAB:return
+-			if line_break!=_E:chunks.append(line_break)
+-			elif not breaks:chunks.append(_D)
+-			chunks.extend(breaks)
+-		elif whitespaces:chunks.append(whitespaces)
+-		return chunks
+-	def scan_tag_handle(self,name,start_mark):
+-		A="expected '!', but found %r";srp=self.reader.peek;ch=srp()
+-		if ch!=_H:raise ScannerError(_Y%(name,),start_mark,A%utf8(ch),self.reader.get_mark())
+-		length=1;ch=srp(length)
+-		if ch!=_D:
+-			while _K<=ch<=_Q or'A'<=ch<='Z'or _U<=ch<='z'or ch in'-_':length+=1;ch=srp(length)
+-			if ch!=_H:self.reader.forward(length);raise ScannerError(_Y%(name,),start_mark,A%utf8(ch),self.reader.get_mark())
+-			length+=1
+-		value=self.reader.prefix(length);self.reader.forward(length);return value
+-	def scan_tag_uri(self,name,start_mark):
+-		srp=self.reader.peek;chunks=[];length=0;ch=srp(length)
+-		while _K<=ch<=_Q or'A'<=ch<='Z'or _U<=ch<='z'or ch in"-;/?:@&=+$,_.!~*'()[]%"or self.scanner_processing_version>(1,1)and ch==_F:
+-			if ch=='%':chunks.append(self.reader.prefix(length));self.reader.forward(length);length=0;chunks.append(self.scan_uri_escapes(name,start_mark))
+-			else:length+=1
+-			ch=srp(length)
+-		if length!=0:chunks.append(self.reader.prefix(length));self.reader.forward(length);length=0
+-		if not chunks:raise ScannerError('while parsing a %s'%(name,),start_mark,'expected URI, but found %r'%utf8(ch),self.reader.get_mark())
+-		return ''.join(chunks)
+-	def scan_uri_escapes(self,name,start_mark):
+-		A='utf-8';srp=self.reader.peek;srf=self.reader.forward;code_bytes=[];mark=self.reader.get_mark()
+-		while srp()=='%':
+-			srf()
+-			for k in range(2):
+-				if srp(k)not in _k:raise ScannerError(_Y%(name,),start_mark,'expected URI escape sequence of 2 hexdecimal numbers, but found %r'%utf8(srp(k)),self.reader.get_mark())
+-			if PY3:code_bytes.append(int(self.reader.prefix(2),16))
+-			else:code_bytes.append(chr(int(self.reader.prefix(2),16)))
+-			srf(2)
+-		try:
+-			if PY3:value=bytes(code_bytes).decode(A)
+-			else:value=unicode(b''.join(code_bytes),A)
+-		except UnicodeDecodeError as exc:raise ScannerError(_Y%(name,),start_mark,str(exc),mark)
+-		return value
+-	def scan_line_break(self):
+-		ch=self.reader.peek()
+-		if ch in _m:
+-			if self.reader.prefix(2)==_n:self.reader.forward(2)
+-			else:self.reader.forward()
+-			return _E
+-		elif ch in _o:self.reader.forward();return ch
+-		return''
+-class RoundTripScanner(Scanner):
+-	def check_token(self,*choices):
+-		while self.need_more_tokens():self.fetch_more_tokens()
+-		self._gather_comments()
+-		if bool(self.tokens):
+-			if not choices:return _A
+-			for choice in choices:
+-				if isinstance(self.tokens[0],choice):return _A
+-		return _B
+-	def peek_token(self):
+-		while self.need_more_tokens():self.fetch_more_tokens()
+-		self._gather_comments()
+-		if bool(self.tokens):return self.tokens[0]
+-		return _C
+-	def _gather_comments(self):
+-		comments=[]
+-		if not self.tokens:return comments
+-		if isinstance(self.tokens[0],CommentToken):comment=self.tokens.pop(0);self.tokens_taken+=1;comments.append(comment)
+-		while self.need_more_tokens():
+-			self.fetch_more_tokens()
+-			if not self.tokens:return comments
+-			if isinstance(self.tokens[0],CommentToken):self.tokens_taken+=1;comment=self.tokens.pop(0);comments.append(comment)
+-		if len(comments)>=1:self.tokens[0].add_pre_comments(comments)
+-		if not self.done and len(self.tokens)<2:self.fetch_more_tokens()
+-	def get_token(self):
+-		while self.need_more_tokens():self.fetch_more_tokens()
+-		self._gather_comments()
+-		if bool(self.tokens):
+-			if len(self.tokens)>1 and isinstance(self.tokens[0],(ScalarToken,ValueToken,FlowSequenceEndToken,FlowMappingEndToken))and isinstance(self.tokens[1],CommentToken)and self.tokens[0].end_mark.line==self.tokens[1].start_mark.line:
+-				self.tokens_taken+=1;c=self.tokens.pop(1);self.fetch_more_tokens()
+-				while len(self.tokens)>1 and isinstance(self.tokens[1],CommentToken):self.tokens_taken+=1;c1=self.tokens.pop(1);c.value=c.value+_D*c1.start_mark.column+c1.value;self.fetch_more_tokens()
+-				self.tokens[0].add_post_comment(c)
+-			elif len(self.tokens)>1 and isinstance(self.tokens[0],ScalarToken)and isinstance(self.tokens[1],CommentToken)and self.tokens[0].end_mark.line!=self.tokens[1].start_mark.line:
+-				self.tokens_taken+=1;c=self.tokens.pop(1);c.value=_E*(c.start_mark.line-self.tokens[0].end_mark.line)+_D*c.start_mark.column+c.value;self.tokens[0].add_post_comment(c);self.fetch_more_tokens()
+-				while len(self.tokens)>1 and isinstance(self.tokens[1],CommentToken):self.tokens_taken+=1;c1=self.tokens.pop(1);c.value=c.value+_D*c1.start_mark.column+c1.value;self.fetch_more_tokens()
+-			self.tokens_taken+=1;return self.tokens.pop(0)
+-		return _C
+-	def fetch_comment(self,comment):
+-		value,start_mark,end_mark=comment
+-		while value and value[-1]==_D:value=value[:-1]
+-		self.tokens.append(CommentToken(value,start_mark,end_mark))
+-	def scan_to_next_token(self):
+-		srp=self.reader.peek;srf=self.reader.forward
+-		if self.reader.index==0 and srp()==_f:srf()
+-		found=_B
+-		while not found:
+-			while srp()==_D:srf()
+-			ch=srp()
+-			if ch==_F:
+-				start_mark=self.reader.get_mark();comment=ch;srf()
+-				while ch not in _THE_END:
+-					ch=srp()
+-					if ch==_I:comment+=_E;break
+-					comment+=ch;srf()
+-				ch=self.scan_line_break()
+-				while len(ch)>0:comment+=ch;ch=self.scan_line_break()
+-				end_mark=self.reader.get_mark()
+-				if not self.flow_level:self.allow_simple_key=_A
+-				return comment,start_mark,end_mark
+-			if bool(self.scan_line_break()):
+-				start_mark=self.reader.get_mark()
+-				if not self.flow_level:self.allow_simple_key=_A
+-				ch=srp()
+-				if ch==_E:
+-					start_mark=self.reader.get_mark();comment=''
+-					while ch:ch=self.scan_line_break(empty_line=_A);comment+=ch
+-					if srp()==_F:comment=comment.rsplit(_E,1)[0]+_E
+-					end_mark=self.reader.get_mark();return comment,start_mark,end_mark
+-			else:found=_A
+-		return _C
+-	def scan_line_break(self,empty_line=_B):
+-		ch=self.reader.peek()
+-		if ch in _m:
+-			if self.reader.prefix(2)==_n:self.reader.forward(2)
+-			else:self.reader.forward()
+-			return _E
+-		elif ch in _o:self.reader.forward();return ch
+-		elif empty_line and ch in'\t ':self.reader.forward();return ch
+-		return''
+-	def scan_block_scalar(self,style,rt=_A):return Scanner.scan_block_scalar(self,style,rt=rt)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/serializer.py b/dynaconf/vendor/ruamel/yaml/serializer.py
+deleted file mode 100644
+index 158ca05..0000000
+--- a/dynaconf/vendor/ruamel/yaml/serializer.py
++++ /dev/null
+@@ -1,91 +0,0 @@
+-from __future__ import absolute_import
+-_F='serializer is not opened'
+-_E='serializer is closed'
+-_D='typ'
+-_C=False
+-_B=True
+-_A=None
+-from .error import YAMLError
+-from .compat import nprint,DBG_NODE,dbg,string_types,nprintf
+-from .util import RegExp
+-from .events import StreamStartEvent,StreamEndEvent,MappingStartEvent,MappingEndEvent,SequenceStartEvent,SequenceEndEvent,AliasEvent,ScalarEvent,DocumentStartEvent,DocumentEndEvent
+-from .nodes import MappingNode,ScalarNode,SequenceNode
+-if _C:from typing import Any,Dict,Union,Text,Optional;from .compat import VersionType
+-__all__=['Serializer','SerializerError']
+-class SerializerError(YAMLError):0
+-class Serializer:
+-	ANCHOR_TEMPLATE='id%03d';ANCHOR_RE=RegExp('id(?!000$)\\d{3,}')
+-	def __init__(A,encoding=_A,explicit_start=_A,explicit_end=_A,version=_A,tags=_A,dumper=_A):
+-		B=version;A.dumper=dumper
+-		if A.dumper is not _A:A.dumper._serializer=A
+-		A.use_encoding=encoding;A.use_explicit_start=explicit_start;A.use_explicit_end=explicit_end
+-		if isinstance(B,string_types):A.use_version=tuple(map(int,B.split('.')))
+-		else:A.use_version=B
+-		A.use_tags=tags;A.serialized_nodes={};A.anchors={};A.last_anchor_id=0;A.closed=_A;A._templated_id=_A
+-	@property
+-	def emitter(self):
+-		A=self
+-		if hasattr(A.dumper,_D):return A.dumper.emitter
+-		return A.dumper._emitter
+-	@property
+-	def resolver(self):
+-		A=self
+-		if hasattr(A.dumper,_D):A.dumper.resolver
+-		return A.dumper._resolver
+-	def open(A):
+-		if A.closed is _A:A.emitter.emit(StreamStartEvent(encoding=A.use_encoding));A.closed=_C
+-		elif A.closed:raise SerializerError(_E)
+-		else:raise SerializerError('serializer is already opened')
+-	def close(A):
+-		if A.closed is _A:raise SerializerError(_F)
+-		elif not A.closed:A.emitter.emit(StreamEndEvent());A.closed=_B
+-	def serialize(A,node):
+-		B=node
+-		if dbg(DBG_NODE):nprint('Serializing nodes');B.dump()
+-		if A.closed is _A:raise SerializerError(_F)
+-		elif A.closed:raise SerializerError(_E)
+-		A.emitter.emit(DocumentStartEvent(explicit=A.use_explicit_start,version=A.use_version,tags=A.use_tags));A.anchor_node(B);A.serialize_node(B,_A,_A);A.emitter.emit(DocumentEndEvent(explicit=A.use_explicit_end));A.serialized_nodes={};A.anchors={};A.last_anchor_id=0
+-	def anchor_node(B,node):
+-		A=node
+-		if A in B.anchors:
+-			if B.anchors[A]is _A:B.anchors[A]=B.generate_anchor(A)
+-		else:
+-			C=_A
+-			try:
+-				if A.anchor.always_dump:C=A.anchor.value
+-			except:pass
+-			B.anchors[A]=C
+-			if isinstance(A,SequenceNode):
+-				for D in A.value:B.anchor_node(D)
+-			elif isinstance(A,MappingNode):
+-				for (E,F) in A.value:B.anchor_node(E);B.anchor_node(F)
+-	def generate_anchor(A,node):
+-		try:B=node.anchor.value
+-		except:B=_A
+-		if B is _A:A.last_anchor_id+=1;return A.ANCHOR_TEMPLATE%A.last_anchor_id
+-		return B
+-	def serialize_node(B,node,parent,index):
+-		F=index;A=node;G=B.anchors[A]
+-		if A in B.serialized_nodes:B.emitter.emit(AliasEvent(G))
+-		else:
+-			B.serialized_nodes[A]=_B;B.resolver.descend_resolver(parent,F)
+-			if isinstance(A,ScalarNode):K=B.resolver.resolve(ScalarNode,A.value,(_B,_C));L=B.resolver.resolve(ScalarNode,A.value,(_C,_B));E=A.tag==K,A.tag==L,A.tag.startswith('tag:yaml.org,2002:');B.emitter.emit(ScalarEvent(G,A.tag,E,A.value,style=A.style,comment=A.comment))
+-			elif isinstance(A,SequenceNode):
+-				E=A.tag==B.resolver.resolve(SequenceNode,A.value,_B);C=A.comment;D=_A;H=_A
+-				if A.flow_style is _B:
+-					if C:H=C[0]
+-				if C and len(C)>2:D=C[2]
+-				else:D=_A
+-				B.emitter.emit(SequenceStartEvent(G,A.tag,E,flow_style=A.flow_style,comment=A.comment));F=0
+-				for M in A.value:B.serialize_node(M,A,F);F+=1
+-				B.emitter.emit(SequenceEndEvent(comment=[H,D]))
+-			elif isinstance(A,MappingNode):
+-				E=A.tag==B.resolver.resolve(MappingNode,A.value,_B);C=A.comment;D=_A;I=_A
+-				if A.flow_style is _B:
+-					if C:I=C[0]
+-				if C and len(C)>2:D=C[2]
+-				B.emitter.emit(MappingStartEvent(G,A.tag,E,flow_style=A.flow_style,comment=A.comment,nr_items=len(A.value)))
+-				for (J,N) in A.value:B.serialize_node(J,A,_A);B.serialize_node(N,A,J)
+-				B.emitter.emit(MappingEndEvent(comment=[I,D]))
+-			B.resolver.ascend_resolver()
+-def templated_id(s):return Serializer.ANCHOR_RE.match(s)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/setup.cfg b/dynaconf/vendor/ruamel/yaml/setup.cfg
+deleted file mode 100644
+index 8bfd5a1..0000000
+--- a/dynaconf/vendor/ruamel/yaml/setup.cfg
++++ /dev/null
+@@ -1,4 +0,0 @@
+-[egg_info]
+-tag_build = 
+-tag_date = 0
+-
+diff --git a/dynaconf/vendor/ruamel/yaml/setup.py b/dynaconf/vendor/ruamel/yaml/setup.py
+deleted file mode 100644
+index 690e172..0000000
+--- a/dynaconf/vendor/ruamel/yaml/setup.py
++++ /dev/null
+@@ -1,402 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_V='bdist_wheel'
+-_U='--version'
+-_T='extra_packages'
+-_S='universal'
+-_R='nested'
+-_Q='setting  distdir {}/{}'
+-_P='nsp'
+-_O='PYDISTBASE'
+-_N='True'
+-_M='DVDEBUG'
+-_L='LICENSE'
+-_K='Jython'
+-_J='install'
+-_I='full_package_name'
+-_H='__init__.py'
+-_G='python'
+-_F='setup.py'
+-_E='utf-8'
+-_D=True
+-_C=False
+-_B='.'
+-_A=None
+-import sys,os,datetime,traceback
+-sys.path=[path for path in sys.path if path not in[os.getcwd(),'']]
+-import platform
+-from _ast import *
+-from ast import parse
+-from setuptools import setup,Extension,Distribution
+-from setuptools.command import install_lib
+-from setuptools.command.sdist import sdist as _sdist
+-try:from setuptools.namespaces import Installer as NameSpaceInstaller
+-except ImportError:msg='You should use the latest setuptools. The namespaces.py file that this setup.py uses was added in setuptools 28.7.0 (Oct 2016)';print(msg);sys.exit()
+-if __name__!='__main__':raise NotImplementedError('should never include setup.py')
+-full_package_name=_A
+-if sys.version_info<(3,):string_type=basestring
+-else:string_type=str
+-if sys.version_info<(3,4):
+-	class Bytes:0
+-	class NameConstant:0
+-if sys.version_info>=(3,8):from ast import Str,Num,Bytes,NameConstant
+-if sys.version_info<(3,):open_kw=dict()
+-else:open_kw=dict(encoding=_E)
+-if sys.version_info<(2,7)or platform.python_implementation()==_K:
+-	class Set:0
+-if os.environ.get(_M,'')=='':
+-	def debug(*args,**kw):0
+-else:
+-	def debug(*args,**kw):
+-		with open(os.environ[_M],'a')as fp:kw1=kw.copy();kw1['file']=fp;print('{:%Y-%d-%mT%H:%M:%S}'.format(datetime.datetime.now()),file=fp,end=' ');print(*args,**kw1)
+-def literal_eval(node_or_string):
+-	_safe_names={'None':_A,_N:_D,'False':_C}
+-	if isinstance(node_or_string,string_type):node_or_string=parse(node_or_string,mode='eval')
+-	if isinstance(node_or_string,Expression):node_or_string=node_or_string.body
+-	else:raise TypeError('only string or AST nodes supported')
+-	def _convert(node):
+-		if isinstance(node,Str):
+-			if sys.version_info<(3,)and not isinstance(node.s,unicode):return node.s.decode(_E)
+-			return node.s
+-		elif isinstance(node,Bytes):return node.s
+-		elif isinstance(node,Num):return node.n
+-		elif isinstance(node,Tuple):return tuple(map(_convert,node.elts))
+-		elif isinstance(node,List):return list(map(_convert,node.elts))
+-		elif isinstance(node,Set):return set(map(_convert,node.elts))
+-		elif isinstance(node,Dict):return dict(((_convert(k),_convert(v))for(k,v)in zip(node.keys,node.values)))
+-		elif isinstance(node,NameConstant):return node.value
+-		elif sys.version_info<(3,4)and isinstance(node,Name):
+-			if node.id in _safe_names:return _safe_names[node.id]
+-		elif isinstance(node,UnaryOp)and isinstance(node.op,(UAdd,USub))and isinstance(node.operand,(Num,UnaryOp,BinOp)):
+-			operand=_convert(node.operand)
+-			if isinstance(node.op,UAdd):return+operand
+-			else:return-operand
+-		elif isinstance(node,BinOp)and isinstance(node.op,(Add,Sub))and isinstance(node.right,(Num,UnaryOp,BinOp))and isinstance(node.left,(Num,UnaryOp,BinOp)):
+-			left=_convert(node.left);right=_convert(node.right)
+-			if isinstance(node.op,Add):return left+right
+-			else:return left-right
+-		elif isinstance(node,Call):
+-			func_id=getattr(node.func,'id',_A)
+-			if func_id=='dict':return dict(((k.arg,_convert(k.value))for k in node.keywords))
+-			elif func_id=='set':return set(_convert(node.args[0]))
+-			elif func_id=='date':return datetime.date(*[_convert(k)for k in node.args])
+-			elif func_id=='datetime':return datetime.datetime(*[_convert(k)for k in node.args])
+-		err=SyntaxError('malformed node or string: '+repr(node));err.filename='<string>';err.lineno=node.lineno;err.offset=node.col_offset;err.text=repr(node);err.node=node;raise err
+-	return _convert(node_or_string)
+-def _package_data(fn):
+-	data={}
+-	with open(fn,**open_kw)as fp:
+-		parsing=_C;lines=[]
+-		for line in fp.readlines():
+-			if sys.version_info<(3,):line=line.decode(_E)
+-			if line.startswith('_package_data'):
+-				if'dict('in line:parsing=_G;lines.append('dict(\n')
+-				elif line.endswith('= {\n'):parsing=_G;lines.append('{\n')
+-				else:raise NotImplementedError
+-				continue
+-			if not parsing:continue
+-			if parsing==_G:
+-				if line.startswith(')')or line.startswith('}'):
+-					lines.append(line)
+-					try:data=literal_eval(''.join(lines))
+-					except SyntaxError as e:
+-						context=2;from_line=e.lineno-(context+1);to_line=e.lineno+(context-1);w=len(str(to_line))
+-						for (index,line) in enumerate(lines):
+-							if from_line<=index<=to_line:
+-								print('{0:{1}}: {2}'.format(index,w,line).encode(_E),end='')
+-								if index==e.lineno-1:print('{0:{1}}  {2}^--- {3}'.format(' ',w,' '*e.offset,e.node))
+-						raise
+-					break
+-				lines.append(line)
+-			else:raise NotImplementedError
+-	return data
+-pkg_data=_package_data(__file__.replace(_F,_H))
+-exclude_files=[_F]
+-def _check_convert_version(tup):
+-	ret_val=str(tup[0]);next_sep=_B;nr_digits=0;post_dev=_C
+-	for x in tup[1:]:
+-		if isinstance(x,int):
+-			nr_digits+=1
+-			if nr_digits>2:raise ValueError('too many consecutive digits after '+ret_val)
+-			ret_val+=next_sep+str(x);next_sep=_B;continue
+-		first_letter=x[0].lower();next_sep=''
+-		if first_letter in'abcr':
+-			if post_dev:raise ValueError('release level specified after post/dev: '+x)
+-			nr_digits=0;ret_val+='rc'if first_letter=='r'else first_letter
+-		elif first_letter in'pd':nr_digits=1;post_dev=_D;ret_val+='.post'if first_letter=='p'else'.dev'
+-		else:raise ValueError('First letter of "'+x+'" not recognised')
+-	if nr_digits==1 and post_dev:ret_val+='0'
+-	return ret_val
+-version_info=pkg_data['version_info']
+-version_str=_check_convert_version(version_info)
+-class MyInstallLib(install_lib.install_lib):
+-	def install(self):
+-		fpp=pkg_data[_I].split(_B);full_exclude_files=[os.path.join(*fpp+[x])for x in exclude_files];alt_files=[];outfiles=install_lib.install_lib.install(self)
+-		for x in outfiles:
+-			for full_exclude_file in full_exclude_files:
+-				if full_exclude_file in x:os.remove(x);break
+-			else:alt_files.append(x)
+-		return alt_files
+-class MySdist(_sdist):
+-	def initialize_options(self):
+-		_sdist.initialize_options(self);dist_base=os.environ.get(_O);fpn=getattr(getattr(self,_P,self),_I,_A)
+-		if fpn and dist_base:print(_Q.format(dist_base,fpn));self.dist_dir=os.path.join(dist_base,fpn)
+-try:
+-	from wheel.bdist_wheel import bdist_wheel as _bdist_wheel
+-	class MyBdistWheel(_bdist_wheel):
+-		def initialize_options(self):
+-			_bdist_wheel.initialize_options(self);dist_base=os.environ.get(_O);fpn=getattr(getattr(self,_P,self),_I,_A)
+-			if fpn and dist_base:print(_Q.format(dist_base,fpn));self.dist_dir=os.path.join(dist_base,fpn)
+-	_bdist_wheel_available=_D
+-except ImportError:_bdist_wheel_available=_C
+-class NameSpacePackager:
+-	def __init__(self,pkg_data):
+-		assert isinstance(pkg_data,dict);self._pkg_data=pkg_data;self.full_package_name=self.pn(self._pkg_data[_I]);self._split=_A;self.depth=self.full_package_name.count(_B);self.nested=self._pkg_data.get(_R,_C)
+-		if self.nested:NameSpaceInstaller.install_namespaces=lambda x:_A
+-		self.command=_A;self.python_version();self._pkg=[_A,_A]
+-		if sys.argv[0]==_F and sys.argv[1]==_J and'--single-version-externally-managed'not in sys.argv:
+-			if os.environ.get('READTHEDOCS',_A)==_N:os.system('pip install .');sys.exit(0)
+-			if not os.environ.get('RUAMEL_NO_PIP_INSTALL_CHECK',_C):print('error: you have to install with "pip install ."');sys.exit(1)
+-		if self._pkg_data.get(_S):Distribution.is_pure=lambda *args:_D
+-		else:Distribution.is_pure=lambda *args:_C
+-		for x in sys.argv:
+-			if x[0]=='-'or x==_F:continue
+-			self.command=x;break
+-	def pn(self,s):
+-		if sys.version_info<(3,)and isinstance(s,unicode):return s.encode(_E)
+-		return s
+-	@property
+-	def split(self):
+-		skip=[]
+-		if self._split is _A:
+-			fpn=self.full_package_name.split(_B);self._split=[]
+-			while fpn:self._split.insert(0,_B.join(fpn));fpn=fpn[:-1]
+-			for d in sorted(os.listdir(_B)):
+-				if not os.path.isdir(d)or d==self._split[0]or d[0]in'._':continue
+-				x=os.path.join(d,_H)
+-				if os.path.exists(x):
+-					pd=_package_data(x)
+-					if pd.get(_R,_C):skip.append(d);continue
+-					self._split.append(self.full_package_name+_B+d)
+-			if sys.version_info<(3,):self._split=[y.encode(_E)if isinstance(y,unicode)else y for y in self._split]
+-		if skip:0
+-		return self._split
+-	@property
+-	def namespace_packages(self):return self.split[:self.depth]
+-	def namespace_directories(self,depth=_A):
+-		res=[]
+-		for (index,d) in enumerate(self.split[:depth]):
+-			if index>0:d=os.path.join(*d.split(_B))
+-			res.append(_B+d)
+-		return res
+-	@property
+-	def package_dir(self):
+-		d={self.full_package_name:_B}
+-		if _T in self._pkg_data:return d
+-		if len(self.split)>1:d[self.split[0]]=self.namespace_directories(1)[0]
+-		return d
+-	def create_dirs(self):
+-		directories=self.namespace_directories(self.depth)
+-		if not directories:return
+-		if not os.path.exists(directories[0]):
+-			for d in directories:
+-				os.mkdir(d)
+-				with open(os.path.join(d,_H),'w')as fp:fp.write('import pkg_resources\npkg_resources.declare_namespace(__name__)\n')
+-	def python_version(self):
+-		supported=self._pkg_data.get('supported')
+-		if supported is _A:return
+-		if len(supported)==1:minimum=supported[0]
+-		else:
+-			for x in supported:
+-				if x[0]==sys.version_info[0]:minimum=x;break
+-			else:return
+-		if sys.version_info<minimum:print('minimum python version(s): '+str(supported));sys.exit(1)
+-	def check(self):
+-		A='develop'
+-		try:from pip.exceptions import InstallationError
+-		except ImportError:return
+-		if self.command not in[_J,A]:return
+-		prefix=self.split[0];prefixes=set([prefix,prefix.replace('_','-')])
+-		for p in sys.path:
+-			if not p:continue
+-			if os.path.exists(os.path.join(p,_F)):continue
+-			if not os.path.isdir(p):continue
+-			if p.startswith('/tmp/'):continue
+-			for fn in os.listdir(p):
+-				for pre in prefixes:
+-					if fn.startswith(pre):break
+-				else:continue
+-				full_name=os.path.join(p,fn)
+-				if fn==prefix and os.path.isdir(full_name):
+-					if self.command==A:raise InstallationError('Cannot mix develop (pip install -e),\nwith non-develop installs for package name {0}'.format(fn))
+-				elif fn==prefix:raise InstallationError('non directory package {0} in {1}'.format(fn,p))
+-				for pre in [x+_B for x in prefixes]:
+-					if fn.startswith(pre):break
+-				else:continue
+-				if fn.endswith('-link')and self.command==_J:raise InstallationError('Cannot mix non-develop with develop\n(pip install -e) installs for package name {0}'.format(fn))
+-	def entry_points(self,script_name=_A,package_name=_A):
+-		A='console_scripts'
+-		def pckg_entry_point(name):return '{0}{1}:main'.format(name,'.__main__'if os.path.exists('__main__.py')else'')
+-		ep=self._pkg_data.get('entry_points',_D)
+-		if isinstance(ep,dict):return ep
+-		if ep is _A:return _A
+-		if ep not in[_D,1]:
+-			if'='in ep:return{A:[ep]}
+-			script_name=ep
+-		if package_name is _A:package_name=self.full_package_name
+-		if not script_name:script_name=package_name.split(_B)[-1]
+-		return{A:['{0} = {1}'.format(script_name,pckg_entry_point(package_name))]}
+-	@property
+-	def url(self):
+-		url=self._pkg_data.get('url')
+-		if url:return url
+-		sp=self.full_package_name
+-		for ch in '_.':sp=sp.replace(ch,'-')
+-		return 'https://sourceforge.net/p/{0}/code/ci/default/tree'.format(sp)
+-	@property
+-	def author(self):return self._pkg_data['author']
+-	@property
+-	def author_email(self):return self._pkg_data['author_email']
+-	@property
+-	def license(self):
+-		lic=self._pkg_data.get('license')
+-		if lic is _A:return'MIT license'
+-		return lic
+-	def has_mit_lic(self):return'MIT'in self.license
+-	@property
+-	def description(self):return self._pkg_data['description']
+-	@property
+-	def status(self):
+-		A='β';status=self._pkg_data.get('status',A).lower()
+-		if status in['α','alpha']:return 3,'Alpha'
+-		elif status in[A,'beta']:return 4,'Beta'
+-		elif'stable'in status.lower():return 5,'Production/Stable'
+-		raise NotImplementedError
+-	@property
+-	def classifiers(self):
+-		attr='_'+sys._getframe().f_code.co_name
+-		if not hasattr(self,attr):setattr(self,attr,self._setup_classifiers())
+-		return getattr(self,attr)
+-	def _setup_classifiers(self):return sorted(set(['Development Status :: {0} - {1}'.format(*self.status),'Intended Audience :: Developers','License :: '+('OSI Approved :: MIT'if self.has_mit_lic()else'Other/Proprietary')+' License','Operating System :: OS Independent','Programming Language :: Python']+[self.pn(x)for x in self._pkg_data.get('classifiers',[])]))
+-	@property
+-	def keywords(self):return self.pn(self._pkg_data.get('keywords',[]))
+-	@property
+-	def install_requires(self):return self._analyse_packages[0]
+-	@property
+-	def install_pre(self):return self._analyse_packages[1]
+-	@property
+-	def _analyse_packages(self):
+-		if self._pkg[0]is _A:self._pkg[0]=[];self._pkg[1]=[]
+-		ir=self._pkg_data.get('install_requires')
+-		if ir is _A:return self._pkg
+-		if isinstance(ir,list):self._pkg[0]=ir;return self._pkg
+-		packages=ir.get('any',[])
+-		if isinstance(packages,string_type):packages=packages.split()
+-		if self.nested:
+-			parent_pkg=self.full_package_name.rsplit(_B,1)[0]
+-			if parent_pkg not in packages:packages.append(parent_pkg)
+-		implementation=platform.python_implementation()
+-		if implementation=='CPython':pyver='py{0}{1}'.format(*sys.version_info)
+-		elif implementation=='PyPy':pyver='pypy'if sys.version_info<(3,)else'pypy3'
+-		elif implementation==_K:pyver='jython'
+-		packages.extend(ir.get(pyver,[]))
+-		for p in packages:
+-			if p[0]=='*':p=p[1:];self._pkg[1].append(p)
+-			self._pkg[0].append(p)
+-		return self._pkg
+-	@property
+-	def extras_require(self):ep=self._pkg_data.get('extras_require');return ep
+-	@property
+-	def package_data(self):
+-		df=self._pkg_data.get('data_files',[])
+-		if self.has_mit_lic():df.append(_L);exclude_files.append(_L)
+-		if self._pkg_data.get('binary_only',_C):exclude_files.append(_H)
+-		debug('testing<<<<<')
+-		if'Typing :: Typed'in self.classifiers:debug('appending');df.append('py.typed')
+-		pd=self._pkg_data.get('package_data',{})
+-		if df:pd[self.full_package_name]=df
+-		if sys.version_info<(3,):
+-			for k in pd:
+-				if isinstance(k,unicode):pd[str(k)]=pd.pop(k)
+-		return pd
+-	@property
+-	def packages(self):s=self.split;return s+self._pkg_data.get(_T,[])
+-	@property
+-	def python_requires(self):return self._pkg_data.get('python_requires',_A)
+-	@property
+-	def ext_modules(self):
+-		I='Exception:';H='link error';G='compile error:';F='Windows';E='lib';D='src';C='ext_modules';B='test';A='name'
+-		if hasattr(self,'_ext_modules'):return self._ext_modules
+-		if _U in sys.argv:return _A
+-		if platform.python_implementation()==_K:return _A
+-		try:
+-			plat=sys.argv.index('--plat-name')
+-			if'win'in sys.argv[plat+1]:return _A
+-		except ValueError:pass
+-		self._ext_modules=[];no_test_compile=_C
+-		if'--restructuredtext'in sys.argv:no_test_compile=_D
+-		elif'sdist'in sys.argv:no_test_compile=_D
+-		if no_test_compile:
+-			for target in self._pkg_data.get(C,[]):ext=Extension(self.pn(target[A]),sources=[self.pn(x)for x in target[D]],libraries=[self.pn(x)for x in target.get(E)]);self._ext_modules.append(ext)
+-			return self._ext_modules
+-		print('sys.argv',sys.argv);import tempfile,shutil;from textwrap import dedent;import distutils.sysconfig,distutils.ccompiler;from distutils.errors import CompileError,LinkError
+-		for target in self._pkg_data.get(C,[]):
+-			ext=Extension(self.pn(target[A]),sources=[self.pn(x)for x in target[D]],libraries=[self.pn(x)for x in target.get(E)])
+-			if B not in target:self._ext_modules.append(ext);continue
+-			if sys.version_info[:2]==(3,4)and platform.system()==F:
+-				if'FORCE_C_BUILD_TEST'not in os.environ:self._ext_modules.append(ext);continue
+-			c_code=dedent(target[B])
+-			try:
+-				tmp_dir=tempfile.mkdtemp(prefix='tmp_ruamel_');bin_file_name=B+self.pn(target[A]);print('test compiling',bin_file_name);file_name=os.path.join(tmp_dir,bin_file_name+'.c')
+-				with open(file_name,'w')as fp:fp.write(c_code)
+-				compiler=distutils.ccompiler.new_compiler();assert isinstance(compiler,distutils.ccompiler.CCompiler);distutils.sysconfig.customize_compiler(compiler);compiler.add_include_dir(os.getcwd())
+-				if sys.version_info<(3,):tmp_dir=tmp_dir.encode(_E)
+-				compile_out_dir=tmp_dir
+-				try:compiler.link_executable(compiler.compile([file_name],output_dir=compile_out_dir),bin_file_name,output_dir=tmp_dir,libraries=ext.libraries)
+-				except CompileError:debug(G,file_name);print(G,file_name);continue
+-				except LinkError:debug(H,file_name);print(H,file_name);continue
+-				self._ext_modules.append(ext)
+-			except Exception as e:
+-				debug(I,e);print(I,e)
+-				if sys.version_info[:2]==(3,4)and platform.system()==F:traceback.print_exc()
+-			finally:shutil.rmtree(tmp_dir)
+-		return self._ext_modules
+-	@property
+-	def test_suite(self):return self._pkg_data.get('test_suite')
+-	def wheel(self,kw,setup):
+-		if _V not in sys.argv:return _C
+-		file_name='setup.cfg'
+-		if os.path.exists(file_name):return _C
+-		with open(file_name,'w')as fp:
+-			if os.path.exists(_L):fp.write('[metadata]\nlicense-file = LICENSE\n')
+-			else:print('\n\n>>>>>> LICENSE file not found <<<<<\n\n')
+-			if self._pkg_data.get(_S):fp.write('[bdist_wheel]\nuniversal = 1\n')
+-		try:setup(**kw)
+-		except Exception:raise
+-		finally:os.remove(file_name)
+-		return _D
+-def main():
+-	A='tarfmt';dump_kw='--dump-kw'
+-	if dump_kw in sys.argv:import wheel,distutils,setuptools;print('python:    ',sys.version);print('setuptools:',setuptools.__version__);print('distutils: ',distutils.__version__);print('wheel:     ',wheel.__version__)
+-	nsp=NameSpacePackager(pkg_data);nsp.check();nsp.create_dirs();MySdist.nsp=nsp
+-	if pkg_data.get(A):MySdist.tarfmt=pkg_data.get(A)
+-	cmdclass=dict(install_lib=MyInstallLib,sdist=MySdist)
+-	if _bdist_wheel_available:MyBdistWheel.nsp=nsp;cmdclass[_V]=MyBdistWheel
+-	kw=dict(name=nsp.full_package_name,namespace_packages=nsp.namespace_packages,version=version_str,packages=nsp.packages,python_requires=nsp.python_requires,url=nsp.url,author=nsp.author,author_email=nsp.author_email,cmdclass=cmdclass,package_dir=nsp.package_dir,entry_points=nsp.entry_points(),description=nsp.description,install_requires=nsp.install_requires,extras_require=nsp.extras_require,license=nsp.license,classifiers=nsp.classifiers,keywords=nsp.keywords,package_data=nsp.package_data,ext_modules=nsp.ext_modules,test_suite=nsp.test_suite)
+-	if _U not in sys.argv and('--verbose'in sys.argv or dump_kw in sys.argv):
+-		for k in sorted(kw):v=kw[k];print('  "{0}": "{1}",'.format(k,v))
+-	if dump_kw in sys.argv:sys.argv.remove(dump_kw)
+-	try:
+-		with open('README.rst')as fp:kw['long_description']=fp.read();kw['long_description_content_type']='text/x-rst'
+-	except Exception:pass
+-	if nsp.wheel(kw,setup):return
+-	for x in ['-c','egg_info','--egg-base','pip-egg-info']:
+-		if x not in sys.argv:break
+-	else:
+-		for p in nsp.install_pre:
+-			import subprocess;setup_path=os.path.join(*p.split(_B)+[_F]);try_dir=os.path.dirname(sys.executable)
+-			while len(try_dir)>1:
+-				full_path_setup_py=os.path.join(try_dir,setup_path)
+-				if os.path.exists(full_path_setup_py):pip=sys.executable.replace(_G,'pip');cmd=[pip,_J,os.path.dirname(full_path_setup_py)];subprocess.check_output(cmd);break
+-				try_dir=os.path.dirname(try_dir)
+-	setup(**kw)
+-main()
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/timestamp.py b/dynaconf/vendor/ruamel/yaml/timestamp.py
+deleted file mode 100644
+index fafab94..0000000
+--- a/dynaconf/vendor/ruamel/yaml/timestamp.py
++++ /dev/null
+@@ -1,8 +0,0 @@
+-from __future__ import print_function,absolute_import,division,unicode_literals
+-_A=False
+-import datetime,copy
+-if _A:from typing import Any,Dict,Optional,List
+-class TimeStamp(datetime.datetime):
+-	def __init__(A,*B,**C):A._yaml=dict(t=_A,tz=None,delta=0)
+-	def __new__(A,*B,**C):return datetime.datetime.__new__(A,*B,**C)
+-	def __deepcopy__(A,memo):B=TimeStamp(A.year,A.month,A.day,A.hour,A.minute,A.second);B._yaml=copy.deepcopy(A._yaml);return B
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/tokens.py b/dynaconf/vendor/ruamel/yaml/tokens.py
+deleted file mode 100644
+index 97b4c46..0000000
+--- a/dynaconf/vendor/ruamel/yaml/tokens.py
++++ /dev/null
+@@ -1,97 +0,0 @@
+-from __future__ import unicode_literals
+-_F=', line: '
+-_E='pre_done'
+-_D=False
+-_C='value'
+-_B='_comment'
+-_A=None
+-if _D:from typing import Text,Any,Dict,Optional,List;from .error import StreamMark
+-SHOWLINES=True
+-class Token:
+-	__slots__='start_mark','end_mark',_B
+-	def __init__(A,start_mark,end_mark):A.start_mark=start_mark;A.end_mark=end_mark
+-	def __repr__(A):
+-		C=[B for B in A.__slots__ if not B.endswith('_mark')];C.sort();B=', '.join(['%s=%r'%(B,getattr(A,B))for B in C])
+-		if SHOWLINES:
+-			try:B+=_F+str(A.start_mark.line)
+-			except:pass
+-		try:B+=', comment: '+str(A._comment)
+-		except:pass
+-		return '{}({})'.format(A.__class__.__name__,B)
+-	def add_post_comment(A,comment):
+-		if not hasattr(A,_B):A._comment=[_A,_A]
+-		A._comment[0]=comment
+-	def add_pre_comments(A,comments):
+-		if not hasattr(A,_B):A._comment=[_A,_A]
+-		assert A._comment[1]is _A;A._comment[1]=comments
+-	def get_comment(A):return getattr(A,_B,_A)
+-	@property
+-	def comment(self):return getattr(self,_B,_A)
+-	def move_comment(C,target,empty=_D):
+-		D=target;A=C.comment
+-		if A is _A:return
+-		if isinstance(D,(StreamEndToken,DocumentStartToken)):return
+-		delattr(C,_B);B=D.comment
+-		if not B:
+-			if empty:A=[A[0],A[1],_A,_A,A[0]]
+-			D._comment=A;return C
+-		if A[0]and B[0]or A[1]and B[1]:raise NotImplementedError('overlap in comment %r %r'%(A,B))
+-		if A[0]:B[0]=A[0]
+-		if A[1]:B[1]=A[1]
+-		return C
+-	def split_comment(B):
+-		A=B.comment
+-		if A is _A or A[0]is _A:return _A
+-		C=[A[0],_A]
+-		if A[1]is _A:delattr(B,_B)
+-		return C
+-class DirectiveToken(Token):
+-	__slots__='name',_C;id='<directive>'
+-	def __init__(A,name,value,start_mark,end_mark):Token.__init__(A,start_mark,end_mark);A.name=name;A.value=value
+-class DocumentStartToken(Token):__slots__=();id='<document start>'
+-class DocumentEndToken(Token):__slots__=();id='<document end>'
+-class StreamStartToken(Token):
+-	__slots__='encoding',;id='<stream start>'
+-	def __init__(A,start_mark=_A,end_mark=_A,encoding=_A):Token.__init__(A,start_mark,end_mark);A.encoding=encoding
+-class StreamEndToken(Token):__slots__=();id='<stream end>'
+-class BlockSequenceStartToken(Token):__slots__=();id='<block sequence start>'
+-class BlockMappingStartToken(Token):__slots__=();id='<block mapping start>'
+-class BlockEndToken(Token):__slots__=();id='<block end>'
+-class FlowSequenceStartToken(Token):__slots__=();id='['
+-class FlowMappingStartToken(Token):__slots__=();id='{'
+-class FlowSequenceEndToken(Token):__slots__=();id=']'
+-class FlowMappingEndToken(Token):__slots__=();id='}'
+-class KeyToken(Token):__slots__=();id='?'
+-class ValueToken(Token):__slots__=();id=':'
+-class BlockEntryToken(Token):__slots__=();id='-'
+-class FlowEntryToken(Token):__slots__=();id=','
+-class AliasToken(Token):
+-	__slots__=_C,;id='<alias>'
+-	def __init__(A,value,start_mark,end_mark):Token.__init__(A,start_mark,end_mark);A.value=value
+-class AnchorToken(Token):
+-	__slots__=_C,;id='<anchor>'
+-	def __init__(A,value,start_mark,end_mark):Token.__init__(A,start_mark,end_mark);A.value=value
+-class TagToken(Token):
+-	__slots__=_C,;id='<tag>'
+-	def __init__(A,value,start_mark,end_mark):Token.__init__(A,start_mark,end_mark);A.value=value
+-class ScalarToken(Token):
+-	__slots__=_C,'plain','style';id='<scalar>'
+-	def __init__(A,value,plain,start_mark,end_mark,style=_A):Token.__init__(A,start_mark,end_mark);A.value=value;A.plain=plain;A.style=style
+-class CommentToken(Token):
+-	__slots__=_C,_E;id='<comment>'
+-	def __init__(A,value,start_mark,end_mark):Token.__init__(A,start_mark,end_mark);A.value=value
+-	def reset(A):
+-		if hasattr(A,_E):delattr(A,_E)
+-	def __repr__(A):
+-		B='{!r}'.format(A.value)
+-		if SHOWLINES:
+-			try:B+=_F+str(A.start_mark.line);B+=', col: '+str(A.start_mark.column)
+-			except:pass
+-		return 'CommentToken({})'.format(B)
+-	def __eq__(A,other):
+-		B=other
+-		if A.start_mark!=B.start_mark:return _D
+-		if A.end_mark!=B.end_mark:return _D
+-		if A.value!=B.value:return _D
+-		return True
+-	def __ne__(A,other):return not A.__eq__(other)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/ruamel/yaml/util.py b/dynaconf/vendor/ruamel/yaml/util.py
+deleted file mode 100644
+index c8c1c6b..0000000
+--- a/dynaconf/vendor/ruamel/yaml/util.py
++++ /dev/null
+@@ -1,69 +0,0 @@
+-from __future__ import absolute_import,print_function
+-_B='lazy_self'
+-_A=' '
+-from functools import partial
+-import re
+-from .compat import text_type,binary_type
+-if False:from typing import Any,Dict,Optional,List,Text;from .compat import StreamTextType
+-class LazyEval:
+-	def __init__(A,func,*C,**D):
+-		def B():B=func(*C,**D);object.__setattr__(A,_B,lambda:B);return B
+-		object.__setattr__(A,_B,B)
+-	def __getattribute__(B,name):
+-		A=object.__getattribute__(B,_B)
+-		if name==_B:return A
+-		return getattr(A(),name)
+-	def __setattr__(A,name,value):setattr(A.lazy_self(),name,value)
+-RegExp=partial(LazyEval,re.compile)
+-def load_yaml_guess_indent(stream,**N):
+-	D=stream;B=None;from .main import round_trip_load as O
+-	def K(l):
+-		A=0
+-		while A<len(l)and l[A]==_A:A+=1
+-		return A
+-	if isinstance(D,text_type):F=D
+-	elif isinstance(D,binary_type):F=D.decode('utf-8')
+-	else:F=D.read()
+-	G=B;H=B;L=B;E=B;I=0
+-	for C in F.splitlines():
+-		J=C.rstrip();P=J.lstrip()
+-		if P.startswith('- '):
+-			M=K(C);L=M-I;A=M+1
+-			while C[A]==_A:A+=1
+-			if C[A]=='#':continue
+-			H=A-I;break
+-		if G is B and E is not B and J:
+-			A=0
+-			while C[A]in' -':A+=1
+-			if A>E:G=A-E
+-		if J.endswith(':'):
+-			I=K(C);A=0
+-			while C[A]==_A:A+=1
+-			E=A;continue
+-		E=B
+-	if H is B and G is not B:H=G
+-	return O(F,**N),H,L
+-def configobj_walker(cfg):
+-	B=cfg;from configobj import ConfigObj as D;assert isinstance(B,D)
+-	for A in B.initial_comment:
+-		if A.strip():yield A
+-	for C in _walk_section(B):
+-		if C.strip():yield C
+-	for A in B.final_comment:
+-		if A.strip():yield A
+-def _walk_section(s,level=0):
+-	L='  ';I="'";H='\n';F=level;from configobj import Section as J;assert isinstance(s,J);D=L*F
+-	for A in s.scalars:
+-		for B in s.comments[A]:yield D+B.strip()
+-		C=s[A]
+-		if H in C:G=D+L;C='|\n'+G+C.strip().replace(H,H+G)
+-		elif':'in C:C=I+C.replace(I,"''")+I
+-		E='{0}{1}: {2}'.format(D,A,C);B=s.inline_comments[A]
+-		if B:E+=_A+B
+-		yield E
+-	for A in s.sections:
+-		for B in s.comments[A]:yield D+B.strip()
+-		E='{0}{1}:'.format(D,A);B=s.inline_comments[A]
+-		if B:E+=_A+B
+-		yield E
+-		for K in _walk_section(s[A],level=F+1):yield K
+\ No newline at end of file
+diff --git a/dynaconf/vendor/toml/README.md b/dynaconf/vendor/toml/README.md
+deleted file mode 100644
+index cbe16fd..0000000
+--- a/dynaconf/vendor/toml/README.md
++++ /dev/null
+@@ -1,5 +0,0 @@
+-## python-toml
+-
+-Vendored dep taken from: https://github.com/uiri/toml
+-Licensed under BSD: https://github.com/uiri/toml/blob/master/LICENSE
+-Current version: 0.10.8
+diff --git a/dynaconf/vendor/toml/__init__.py b/dynaconf/vendor/toml/__init__.py
+deleted file mode 100644
+index 45ba32b..0000000
+--- a/dynaconf/vendor/toml/__init__.py
++++ /dev/null
+@@ -1,16 +0,0 @@
+-from .  import encoder,decoder
+-__version__='0.10.1'
+-_spec_='0.5.0'
+-load=decoder.load
+-loads=decoder.loads
+-TomlDecoder=decoder.TomlDecoder
+-TomlDecodeError=decoder.TomlDecodeError
+-TomlPreserveCommentDecoder=decoder.TomlPreserveCommentDecoder
+-dump=encoder.dump
+-dumps=encoder.dumps
+-TomlEncoder=encoder.TomlEncoder
+-TomlArraySeparatorEncoder=encoder.TomlArraySeparatorEncoder
+-TomlPreserveInlineDictEncoder=encoder.TomlPreserveInlineDictEncoder
+-TomlNumpyEncoder=encoder.TomlNumpyEncoder
+-TomlPreserveCommentEncoder=encoder.TomlPreserveCommentEncoder
+-TomlPathlibEncoder=encoder.TomlPathlibEncoder
+\ No newline at end of file
+diff --git a/dynaconf/vendor/toml/decoder.py b/dynaconf/vendor/toml/decoder.py
+deleted file mode 100644
+index 315c36e..0000000
+--- a/dynaconf/vendor/toml/decoder.py
++++ /dev/null
+@@ -1,515 +0,0 @@
+-_W='Reserved escape sequence used'
+-_V='\\U'
+-_U='false'
+-_T='true'
+-_S='\t'
+-_R='}'
+-_Q='+'
+-_P='_'
+-_O='-'
+-_N=','
+-_M=']'
+-_L=' '
+-_K='{'
+-_J='='
+-_I='['
+-_H='\\'
+-_G='\n'
+-_F='.'
+-_E=None
+-_D="'"
+-_C='"'
+-_B=True
+-_A=False
+-import datetime,io
+-from os import linesep
+-import re,sys
+-from .tz import TomlTz
+-if sys.version_info<(3,):_range=xrange
+-else:unicode=str;_range=range;basestring=str;unichr=chr
+-def _detect_pathlib_path(p):
+-	if(3,4)<=sys.version_info:
+-		import pathlib as A
+-		if isinstance(p,A.PurePath):return _B
+-	return _A
+-def _ispath(p):
+-	if isinstance(p,(bytes,basestring)):return _B
+-	return _detect_pathlib_path(p)
+-def _getpath(p):
+-	if(3,6)<=sys.version_info:import os;return os.fspath(p)
+-	if _detect_pathlib_path(p):return str(p)
+-	return p
+-try:FNFError=FileNotFoundError
+-except NameError:FNFError=IOError
+-TIME_RE=re.compile('([0-9]{2}):([0-9]{2}):([0-9]{2})(\\.([0-9]{3,6}))?')
+-class TomlDecodeError(ValueError):
+-	def __init__(A,msg,doc,pos):C=doc;B=pos;D=C.count(_G,0,B)+1;E=B-C.rfind(_G,0,B);F='{} (line {} column {} char {})'.format(msg,D,E,B);ValueError.__init__(A,F);A.msg=msg;A.doc=C;A.pos=B;A.lineno=D;A.colno=E
+-_number_with_underscores=re.compile('([0-9])(_([0-9]))*')
+-class CommentValue:
+-	def __init__(A,val,comment,beginline,_dict):A.val=val;B=_G if beginline else _L;A.comment=B+comment;A._dict=_dict
+-	def __getitem__(A,key):return A.val[key]
+-	def __setitem__(A,key,value):A.val[key]=value
+-	def dump(A,dump_value_func):
+-		B=dump_value_func(A.val)
+-		if isinstance(A.val,A._dict):return A.comment+_G+unicode(B)
+-		else:return unicode(B)+A.comment
+-def _strictly_valid_num(n):
+-	n=n.strip()
+-	if not n:return _A
+-	if n[0]==_P:return _A
+-	if n[-1]==_P:return _A
+-	if'_.'in n or'._'in n:return _A
+-	if len(n)==1:return _B
+-	if n[0]=='0'and n[1]not in[_F,'o','b','x']:return _A
+-	if n[0]==_Q or n[0]==_O:
+-		n=n[1:]
+-		if len(n)>1 and n[0]=='0'and n[1]!=_F:return _A
+-	if'__'in n:return _A
+-	return _B
+-def load(f,_dict=dict,decoder=_E):
+-	B=_dict;A=decoder
+-	if _ispath(f):
+-		with io.open(_getpath(f),encoding='utf-8')as G:return loads(G.read(),B,A)
+-	elif isinstance(f,list):
+-		from os import path as D;from warnings import warn
+-		if not[A for A in f if D.exists(A)]:C='Load expects a list to contain filenames only.';C+=linesep;C+='The list needs to contain the path of at least one existing file.';raise FNFError(C)
+-		if A is _E:A=TomlDecoder(B)
+-		E=A.get_empty_table()
+-		for F in f:
+-			if D.exists(F):E.update(load(F,B,A))
+-			else:warn('Non-existent filename in list with at least one valid filename')
+-		return E
+-	else:
+-		try:return loads(f.read(),B,A)
+-		except AttributeError:raise TypeError('You can only load a file descriptor, filename or list')
+-_groupname_re=re.compile('^[A-Za-z0-9_-]+$')
+-def loads(s,_dict=dict,decoder=_E):
+-	q="Invalid group name '";K=decoder;d=[]
+-	if K is _E:K=TomlDecoder(_dict)
+-	e=K.get_empty_table();G=e
+-	if not isinstance(s,basestring):raise TypeError('Expecting something like a string')
+-	if not isinstance(s,unicode):s=s.decode('utf8')
+-	I=s;B=list(s);b=0;J=_A;Q='';D=_A;L=_A;V=_B;U=_A;W=_A;O=0;c='';j='';k=1
+-	for (A,E) in enumerate(B):
+-		if E=='\r'and B[A+1]==_G:B[A]=_L;continue
+-		if O:
+-			c+=E
+-			if E==_G:raise TomlDecodeError('Key name found without value. Reached end of line.',I,A)
+-			if J:
+-				if E==Q:
+-					S=_A;F=1
+-					while A>=F and B[A-F]==_H:S=not S;F+=1
+-					if not S:O=2;J=_A;Q=''
+-				continue
+-			elif O==1:
+-				if E.isspace():O=2;continue
+-				elif E==_F:W=_B;continue
+-				elif E.isalnum()or E==_P or E==_O:continue
+-				elif W and B[A-1]==_F and(E==_C or E==_D):J=_B;Q=E;continue
+-			elif O==2:
+-				if E.isspace():
+-					if W:
+-						X=B[A+1]
+-						if not X.isspace()and X!=_F:O=1
+-					continue
+-				if E==_F:
+-					W=_B;X=B[A+1]
+-					if not X.isspace()and X!=_F:O=1
+-					continue
+-			if E==_J:O=0;j=c[:-1].rstrip();c='';W=_A
+-			else:raise TomlDecodeError("Found invalid character in key name: '"+E+"'. Try quoting the key name.",I,A)
+-		if E==_D and Q!=_C:
+-			F=1
+-			try:
+-				while B[A-F]==_D:
+-					F+=1
+-					if F==3:break
+-			except IndexError:pass
+-			if F==3:D=not D;J=D
+-			else:J=not J
+-			if J:Q=_D
+-			else:Q=''
+-		if E==_C and Q!=_D:
+-			S=_A;F=1;f=_A
+-			try:
+-				while B[A-F]==_C:
+-					F+=1
+-					if F==3:f=_B;break
+-				if F==1 or F==3 and f:
+-					while B[A-F]==_H:S=not S;F+=1
+-			except IndexError:pass
+-			if not S:
+-				if f:D=not D;J=D
+-				else:J=not J
+-			if J:Q=_C
+-			else:Q=''
+-		if E=='#'and(not J and not U and not L):
+-			R=A;l=''
+-			try:
+-				while B[R]!=_G:l+=s[R];B[R]=_L;R+=1
+-			except IndexError:break
+-			if not b:K.preserve_comment(k,j,l,V)
+-		if E==_I and(not J and not U and not L):
+-			if V:
+-				if len(B)>A+1 and B[A+1]==_I:L=_B
+-				else:U=_B
+-			else:b+=1
+-		if E==_M and not J:
+-			if U:U=_A
+-			elif L:
+-				if B[A-1]==_M:L=_A
+-			else:b-=1
+-		if E==_G:
+-			if J or D:
+-				if not D:raise TomlDecodeError('Unbalanced quotes',I,A)
+-				if(B[A-1]==_D or B[A-1]==_C)and B[A-2]==B[A-1]:
+-					B[A]=B[A-1]
+-					if B[A-3]==B[A-1]:B[A-3]=_L
+-			elif b:B[A]=_L
+-			else:V=_B
+-			k+=1
+-		elif V and B[A]!=_L and B[A]!=_S:
+-			V=_A
+-			if not U and not L:
+-				if B[A]==_J:raise TomlDecodeError('Found empty keyname. ',I,A)
+-				O=1;c+=E
+-	if O:raise TomlDecodeError('Key name found without value. Reached end of file.',I,len(s))
+-	if J:raise TomlDecodeError('Unterminated string found. Reached end of file.',I,len(s))
+-	s=''.join(B);s=s.split(_G);T=_E;D='';P=_A;N=0
+-	for (g,C) in enumerate(s):
+-		if g>0:N+=len(s[g-1])+1
+-		K.embed_comments(g,G)
+-		if not D or P or _G not in D:C=C.strip()
+-		if C==''and(not T or P):continue
+-		if T:
+-			if P:D+=C
+-			else:D+=C
+-			P=_A;h=_A
+-			if D[0]==_I:h=C[-1]==_M
+-			elif len(C)>2:h=C[-1]==D[0]and C[-2]==D[0]and C[-3]==D[0]
+-			if h:
+-				try:o,r=K.load_value(D)
+-				except ValueError as Y:raise TomlDecodeError(str(Y),I,N)
+-				G[T]=o;T=_E;D=''
+-			else:
+-				F=len(D)-1
+-				while F>-1 and D[F]==_H:P=not P;F-=1
+-				if P:D=D[:-1]
+-				else:D+=_G
+-			continue
+-		if C[0]==_I:
+-			L=_A
+-			if len(C)==1:raise TomlDecodeError('Opening key group bracket on line by itself.',I,N)
+-			if C[1]==_I:L=_B;C=C[2:];Z=']]'
+-			else:C=C[1:];Z=_M
+-			A=1;p=K._get_split_on_quotes(C);i=_A
+-			for m in p:
+-				if not i and Z in m:break
+-				A+=m.count(Z);i=not i
+-			C=C.split(Z,A)
+-			if len(C)<A+1 or C[-1].strip()!='':raise TomlDecodeError('Key group not on a line by itself.',I,N)
+-			H=Z.join(C[:-1]).split(_F);A=0
+-			while A<len(H):
+-				H[A]=H[A].strip()
+-				if len(H[A])>0 and(H[A][0]==_C or H[A][0]==_D):
+-					a=H[A];R=A+1
+-					while not a[0]==a[-1]:
+-						R+=1
+-						if R>len(H)+2:raise TomlDecodeError(q+a+"' Something "+'went wrong.',I,N)
+-						a=_F.join(H[A:R]).strip()
+-					H[A]=a[1:-1];H[A+1:R]=[]
+-				elif not _groupname_re.match(H[A]):raise TomlDecodeError(q+H[A]+"'. Try quoting it.",I,N)
+-				A+=1
+-			G=e
+-			for A in _range(len(H)):
+-				M=H[A]
+-				if M=='':raise TomlDecodeError("Can't have a keygroup with an empty name",I,N)
+-				try:
+-					G[M]
+-					if A==len(H)-1:
+-						if M in d:
+-							d.remove(M)
+-							if L:raise TomlDecodeError("An implicitly defined table can't be an array",I,N)
+-						elif L:G[M].append(K.get_empty_table())
+-						else:raise TomlDecodeError('What? '+M+' already exists?'+str(G),I,N)
+-				except TypeError:
+-					G=G[-1]
+-					if M not in G:
+-						G[M]=K.get_empty_table()
+-						if A==len(H)-1 and L:G[M]=[K.get_empty_table()]
+-				except KeyError:
+-					if A!=len(H)-1:d.append(M)
+-					G[M]=K.get_empty_table()
+-					if A==len(H)-1 and L:G[M]=[K.get_empty_table()]
+-				G=G[M]
+-				if L:
+-					try:G=G[-1]
+-					except KeyError:pass
+-		elif C[0]==_K:
+-			if C[-1]!=_R:raise TomlDecodeError('Line breaks are not allowed in inlineobjects',I,N)
+-			try:K.load_inline_object(C,G,T,P)
+-			except ValueError as Y:raise TomlDecodeError(str(Y),I,N)
+-		elif _J in C:
+-			try:n=K.load_line(C,G,T,P)
+-			except ValueError as Y:raise TomlDecodeError(str(Y),I,N)
+-			if n is not _E:T,D,P=n
+-	return e
+-def _load_date(val):
+-	I='Z';A=val;G=0;F=_E
+-	try:
+-		if len(A)>19:
+-			if A[19]==_F:
+-				if A[-1].upper()==I:C=A[20:-1];D=I
+-				else:
+-					B=A[20:]
+-					if _Q in B:E=B.index(_Q);C=B[:E];D=B[E:]
+-					elif _O in B:E=B.index(_O);C=B[:E];D=B[E:]
+-					else:D=_E;C=B
+-				if D is not _E:F=TomlTz(D)
+-				G=int(int(C)*10**(6-len(C)))
+-			else:F=TomlTz(A[19:])
+-	except ValueError:F=_E
+-	if _O not in A[1:]:return _E
+-	try:
+-		if len(A)==10:H=datetime.date(int(A[:4]),int(A[5:7]),int(A[8:10]))
+-		else:H=datetime.datetime(int(A[:4]),int(A[5:7]),int(A[8:10]),int(A[11:13]),int(A[14:16]),int(A[17:19]),G,F)
+-	except ValueError:return _E
+-	return H
+-def _load_unicode_escapes(v,hexbytes,prefix):
+-	G='Invalid escape sequence: ';E=prefix;C=_A;A=len(v)-1
+-	while A>-1 and v[A]==_H:C=not C;A-=1
+-	for D in hexbytes:
+-		if C:
+-			C=_A;A=len(D)-1
+-			while A>-1 and D[A]==_H:C=not C;A-=1
+-			v+=E;v+=D;continue
+-		B='';A=0;F=4
+-		if E==_V:F=8
+-		B=''.join(D[A:A+F]).lower()
+-		if B.strip('0123456789abcdef'):raise ValueError(G+B)
+-		if B[0]=='d'and B[1].strip('01234567'):raise ValueError(G+B+'. Only scalar unicode points are allowed.')
+-		v+=unichr(int(B,16));v+=unicode(D[len(B):])
+-	return v
+-_escapes=['0','b','f','n','r','t',_C]
+-_escapedchars=['\x00','\x08','\x0c',_G,'\r',_S,_C]
+-_escape_to_escapedchars=dict(zip(_escapes,_escapedchars))
+-def _unescape(v):
+-	A=0;B=_A
+-	while A<len(v):
+-		if B:
+-			B=_A
+-			if v[A]in _escapes:v=v[:A-1]+_escape_to_escapedchars[v[A]]+v[A+1:]
+-			elif v[A]==_H:v=v[:A-1]+v[A:]
+-			elif v[A]=='u'or v[A]=='U':A+=1
+-			else:raise ValueError(_W)
+-			continue
+-		elif v[A]==_H:B=_B
+-		A+=1
+-	return v
+-class InlineTableDict:0
+-class TomlDecoder:
+-	def __init__(A,_dict=dict):A._dict=_dict
+-	def get_empty_table(A):return A._dict()
+-	def get_empty_inline_table(A):
+-		class B(A._dict,InlineTableDict):0
+-		return B()
+-	def load_inline_object(E,line,currentlevel,multikey=_A,multibackslash=_A):
+-		B=line[1:-1].split(_N);D=[]
+-		if len(B)==1 and not B[0].strip():B.pop()
+-		while len(B)>0:
+-			C=B.pop(0)
+-			try:H,A=C.split(_J,1)
+-			except ValueError:raise ValueError('Invalid inline table encountered')
+-			A=A.strip()
+-			if A[0]==A[-1]and A[0]in(_C,_D)or(A[0]in'-0123456789'or A in(_T,_U)or A[0]==_I and A[-1]==_M or A[0]==_K and A[-1]==_R):D.append(C)
+-			elif len(B)>0:B[0]=C+_N+B[0]
+-			else:raise ValueError('Invalid inline table value encountered')
+-		for F in D:
+-			G=E.load_line(F,currentlevel,multikey,multibackslash)
+-			if G is not _E:break
+-	def _get_split_on_quotes(F,line):
+-		A=line.split(_C);D=_A;C=[]
+-		if len(A)>1 and _D in A[0]:
+-			B=A[0].split(_D);A=A[1:]
+-			while len(B)%2==0 and len(A):
+-				B[-1]+=_C+A[0];A=A[1:]
+-				if _D in B[-1]:B=B[:-1]+B[-1].split(_D)
+-			C+=B
+-		for E in A:
+-			if D:C.append(E)
+-			else:C+=E.split(_D);D=not D
+-		return C
+-	def load_line(E,line,currentlevel,multikey,multibackslash):
+-		S='Duplicate keys!';L=multikey;K=line;G=multibackslash;D=currentlevel;H=1;M=E._get_split_on_quotes(K);C=_A
+-		for F in M:
+-			if not C and _J in F:break
+-			H+=F.count(_J);C=not C
+-		A=K.split(_J,H);N=_strictly_valid_num(A[-1])
+-		if _number_with_underscores.match(A[-1]):A[-1]=A[-1].replace(_P,'')
+-		while len(A[-1])and(A[-1][0]!=_L and A[-1][0]!=_S and A[-1][0]!=_D and A[-1][0]!=_C and A[-1][0]!=_I and A[-1][0]!=_K and A[-1].strip()!=_T and A[-1].strip()!=_U):
+-			try:float(A[-1]);break
+-			except ValueError:pass
+-			if _load_date(A[-1])is not _E:break
+-			if TIME_RE.match(A[-1]):break
+-			H+=1;P=A[-1];A=K.split(_J,H)
+-			if P==A[-1]:raise ValueError('Invalid date or number')
+-			if N:N=_strictly_valid_num(A[-1])
+-		A=[_J.join(A[:-1]).strip(),A[-1].strip()]
+-		if _F in A[0]:
+-			if _C in A[0]or _D in A[0]:
+-				M=E._get_split_on_quotes(A[0]);C=_A;B=[]
+-				for F in M:
+-					if C:B.append(F)
+-					else:B+=[A.strip()for A in F.split(_F)]
+-					C=not C
+-			else:B=A[0].split(_F)
+-			while B[-1]=='':B=B[:-1]
+-			for I in B[:-1]:
+-				if I=='':continue
+-				if I not in D:D[I]=E.get_empty_table()
+-				D=D[I]
+-			A[0]=B[-1].strip()
+-		elif(A[0][0]==_C or A[0][0]==_D)and A[0][-1]==A[0][0]:A[0]=_unescape(A[0][1:-1])
+-		J,Q=E._load_line_multiline_str(A[1])
+-		if J>-1:
+-			while J>-1 and A[1][J+Q]==_H:G=not G;J-=1
+-			if G:O=A[1][:-1]
+-			else:O=A[1]+_G
+-			L=A[0]
+-		else:R,T=E.load_value(A[1],N)
+-		try:D[A[0]];raise ValueError(S)
+-		except TypeError:raise ValueError(S)
+-		except KeyError:
+-			if L:return L,O,G
+-			else:D[A[0]]=R
+-	def _load_line_multiline_str(C,p):
+-		B=0
+-		if len(p)<3:return-1,B
+-		if p[0]==_I and(p.strip()[-1]!=_M and C._load_array_isstrarray(p)):
+-			A=p[1:].strip().split(_N)
+-			while len(A)>1 and A[-1][0]!=_C and A[-1][0]!=_D:A=A[:-2]+[A[-2]+_N+A[-1]]
+-			A=A[-1];B=len(p)-len(A);p=A
+-		if p[0]!=_C and p[0]!=_D:return-1,B
+-		if p[1]!=p[0]or p[2]!=p[0]:return-1,B
+-		if len(p)>5 and p[-1]==p[0]and p[-2]==p[0]and p[-3]==p[0]:return-1,B
+-		return len(p)-1,B
+-	def load_value(E,v,strictly_valid=_B):
+-		a='float';Z='int';Y='bool'
+-		if not v:raise ValueError('Empty value is invalid')
+-		if v==_T:return _B,Y
+-		elif v==_U:return _A,Y
+-		elif v[0]==_C or v[0]==_D:
+-			F=v[0];B=v[1:].split(F);G=_A;H=0
+-			if len(B)>1 and B[0]==''and B[1]=='':B=B[2:];G=_B
+-			I=_A
+-			for J in B:
+-				if J=='':
+-					if G:H+=1
+-					else:I=_B
+-				else:
+-					K=_A
+-					try:
+-						A=-1;N=J[A]
+-						while N==_H:K=not K;A-=1;N=J[A]
+-					except IndexError:pass
+-					if not K:
+-						if I:raise ValueError('Found tokens after a closed '+'string. Invalid TOML.')
+-						elif not G or H>1:I=_B
+-						else:H=0
+-			if F==_C:
+-				T=v.split(_H)[1:];C=_A
+-				for A in T:
+-					if A=='':C=not C
+-					else:
+-						if A[0]not in _escapes and(A[0]!='u'and A[0]!='U'and not C):raise ValueError(_W)
+-						if C:C=_A
+-				for L in ['\\u',_V]:
+-					if L in v:O=v.split(L);v=_load_unicode_escapes(O[0],O[1:],L)
+-				v=_unescape(v)
+-			if len(v)>1 and v[1]==F and(len(v)<3 or v[1]==v[2]):v=v[2:-2]
+-			return v[1:-1],'str'
+-		elif v[0]==_I:return E.load_array(v),'array'
+-		elif v[0]==_K:P=E.get_empty_inline_table();E.load_inline_object(v,P);return P,'inline_object'
+-		elif TIME_RE.match(v):U,V,W,b,Q=TIME_RE.match(v).groups();X=datetime.time(int(U),int(V),int(W),int(Q)if Q else 0);return X,'time'
+-		else:
+-			R=_load_date(v)
+-			if R is not _E:return R,'date'
+-			if not strictly_valid:raise ValueError('Weirdness with leading zeroes or underscores in your number.')
+-			D=Z;S=_A
+-			if v[0]==_O:S=_B;v=v[1:]
+-			elif v[0]==_Q:v=v[1:]
+-			v=v.replace(_P,'');M=v.lower()
+-			if _F in v or'x'not in v and('e'in v or'E'in v):
+-				if _F in v and v.split(_F,1)[1]=='':raise ValueError('This float is missing digits after the point')
+-				if v[0]not in'0123456789':raise ValueError("This float doesn't have a leading digit")
+-				v=float(v);D=a
+-			elif len(M)==3 and(M=='inf'or M=='nan'):v=float(v);D=a
+-			if D==Z:v=int(v,0)
+-			if S:return 0-v,D
+-			return v,D
+-	def bounded_string(C,s):
+-		if len(s)==0:return _B
+-		if s[-1]!=s[0]:return _A
+-		A=-2;B=_A
+-		while len(s)+A>0:
+-			if s[A]==_H:B=not B;A-=1
+-			else:break
+-		return not B
+-	def _load_array_isstrarray(A,a):
+-		a=a[1:-1].strip()
+-		if a!=''and(a[0]==_C or a[0]==_D):return _B
+-		return _A
+-	def load_array(H,a):
+-		I=_E;N=[];a=a.strip()
+-		if _I not in a[1:-1]or''!=a[1:-1].split(_I)[0].strip():
+-			Q=H._load_array_isstrarray(a)
+-			if not a[1:-1].strip().startswith(_K):a=a[1:-1].split(_N)
+-			else:
+-				O=[];E=1;A=2;J=1 if a[E]==_K else 0;F=_A
+-				while A<len(a[1:]):
+-					if a[A]==_C or a[A]==_D:
+-						if F:
+-							K=A-1
+-							while K>-1 and a[K]==_H:F=not F;K-=1
+-						F=not F
+-					if not F and a[A]==_K:J+=1
+-					if F or a[A]!=_R:A+=1;continue
+-					elif a[A]==_R and J>1:J-=1;A+=1;continue
+-					A+=1;O.append(a[E:A]);E=A+1
+-					while E<len(a[1:])and a[E]!=_K:E+=1
+-					A=E+1
+-				a=O
+-			B=0
+-			if Q:
+-				while B<len(a)-1:
+-					C=a[B].strip()
+-					while not H.bounded_string(C)or len(C)>2 and C[0]==C[1]==C[2]and C[-2]!=C[0]and C[-3]!=C[0]:
+-						a[B]=a[B]+_N+a[B+1];C=a[B].strip()
+-						if B<len(a)-2:a=a[:B+1]+a[B+2:]
+-						else:a=a[:B+1]
+-					B+=1
+-		else:
+-			G=list(a[1:-1]);a=[];L=0;M=0
+-			for D in _range(len(G)):
+-				if G[D]==_I:L+=1
+-				elif G[D]==_M:L-=1
+-				elif G[D]==_N and not L:a.append(''.join(G[M:D]));M=D+1
+-			a.append(''.join(G[M:]))
+-		for D in _range(len(a)):
+-			a[D]=a[D].strip()
+-			if a[D]!='':
+-				R,P=H.load_value(a[D])
+-				if I:
+-					if P!=I:raise ValueError('Not a homogeneous array')
+-				else:I=P
+-				N.append(R)
+-		return N
+-	def preserve_comment(A,line_no,key,comment,beginline):0
+-	def embed_comments(A,idx,currentlevel):0
+-class TomlPreserveCommentDecoder(TomlDecoder):
+-	def __init__(A,_dict=dict):A.saved_comments={};super(TomlPreserveCommentDecoder,A).__init__(_dict)
+-	def preserve_comment(A,line_no,key,comment,beginline):A.saved_comments[line_no]=key,comment,beginline
+-	def embed_comments(A,idx,currentlevel):
+-		B=currentlevel
+-		if idx not in A.saved_comments:return
+-		C,D,E=A.saved_comments[idx];B[C]=CommentValue(B[C],D,E,A._dict)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/toml/encoder.py b/dynaconf/vendor/toml/encoder.py
+deleted file mode 100644
+index b77dc20..0000000
+--- a/dynaconf/vendor/toml/encoder.py
++++ /dev/null
+@@ -1,134 +0,0 @@
+-_G=']\n'
+-_F=' = '
+-_E='\n'
+-_D=False
+-_C='['
+-_B='.'
+-_A=None
+-import datetime,re,sys
+-from decimal import Decimal
+-from .decoder import InlineTableDict
+-if sys.version_info>=(3,):unicode=str
+-def dump(o,f,encoder=_A):
+-	if not f.write:raise TypeError('You can only dump an object to a file descriptor')
+-	A=dumps(o,encoder=encoder);f.write(A);return A
+-def dumps(o,encoder=_A):
+-	C=encoder;A=''
+-	if C is _A:C=TomlEncoder(o.__class__)
+-	B,D=C.dump_sections(o,'');A+=B;G=[id(o)]
+-	while D:
+-		H=[id(A)for A in D]
+-		for K in G:
+-			if K in H:raise ValueError('Circular reference detected')
+-		G+=H;I=C.get_empty_table()
+-		for E in D:
+-			B,F=C.dump_sections(D[E],E)
+-			if B or not B and not F:
+-				if A and A[-2:]!='\n\n':A+=_E
+-				A+=_C+E+_G
+-				if B:A+=B
+-			for J in F:I[E+_B+J]=F[J]
+-		D=I
+-	return A
+-def _dump_str(v):
+-	G="'";F='\\';C='"'
+-	if sys.version_info<(3,)and hasattr(v,'decode')and isinstance(v,str):v=v.decode('utf-8')
+-	v='%r'%v
+-	if v[0]=='u':v=v[1:]
+-	D=v.startswith(G)
+-	if D or v.startswith(C):v=v[1:-1]
+-	if D:v=v.replace("\\'",G);v=v.replace(C,'\\"')
+-	v=v.split('\\x')
+-	while len(v)>1:
+-		A=-1
+-		if not v[0]:v=v[1:]
+-		v[0]=v[0].replace('\\\\',F);B=v[0][A]!=F
+-		while v[0][:A]and v[0][A]==F:B=not B;A-=1
+-		if B:E='x'
+-		else:E='u00'
+-		v=[v[0]+E+v[1]]+v[2:]
+-	return unicode(C+v[0]+C)
+-def _dump_float(v):return '{}'.format(v).replace('e+0','e+').replace('e-0','e-')
+-def _dump_time(v):
+-	A=v.utcoffset()
+-	if A is _A:return v.isoformat()
+-	return v.isoformat()[:-6]
+-class TomlEncoder:
+-	def __init__(A,_dict=dict,preserve=_D):A._dict=_dict;A.preserve=preserve;A.dump_funcs={str:_dump_str,unicode:_dump_str,list:A.dump_list,bool:lambda v:unicode(v).lower(),int:lambda v:v,float:_dump_float,Decimal:_dump_float,datetime.datetime:lambda v:v.isoformat().replace('+00:00','Z'),datetime.time:_dump_time,datetime.date:lambda v:v.isoformat()}
+-	def get_empty_table(A):return A._dict()
+-	def dump_list(B,v):
+-		A=_C
+-		for C in v:A+=' '+unicode(B.dump_value(C))+','
+-		A+=']';return A
+-	def dump_inline_table(B,section):
+-		A=section;C=''
+-		if isinstance(A,dict):
+-			D=[]
+-			for (E,F) in A.items():G=B.dump_inline_table(F);D.append(E+_F+G)
+-			C+='{ '+', '.join(D)+' }\n';return C
+-		else:return unicode(B.dump_value(A))
+-	def dump_value(B,v):
+-		A=B.dump_funcs.get(type(v))
+-		if A is _A and hasattr(v,'__iter__'):A=B.dump_funcs[list]
+-		return A(v)if A is not _A else B.dump_funcs[str](v)
+-	def dump_sections(C,o,sup):
+-		D=sup;F=''
+-		if D!=''and D[-1]!=_B:D+=_B
+-		M=C._dict();G=''
+-		for A in o:
+-			A=unicode(A);B=A
+-			if not re.match('^[A-Za-z0-9_-]+$',A):B=_dump_str(A)
+-			if not isinstance(o[A],dict):
+-				N=_D
+-				if isinstance(o[A],list):
+-					for L in o[A]:
+-						if isinstance(L,dict):N=True
+-				if N:
+-					for L in o[A]:
+-						H=_E;G+='[['+D+B+']]\n';I,J=C.dump_sections(L,D+B)
+-						if I:
+-							if I[0]==_C:H+=I
+-							else:G+=I
+-						while J:
+-							O=C._dict()
+-							for K in J:
+-								E,P=C.dump_sections(J[K],D+B+_B+K)
+-								if E:H+=_C+D+B+_B+K+_G;H+=E
+-								for E in P:O[K+_B+E]=P[E]
+-							J=O
+-						G+=H
+-				elif o[A]is not _A:F+=B+_F+unicode(C.dump_value(o[A]))+_E
+-			elif C.preserve and isinstance(o[A],InlineTableDict):F+=B+_F+C.dump_inline_table(o[A])
+-			else:M[B]=o[A]
+-		F+=G;return F,M
+-class TomlPreserveInlineDictEncoder(TomlEncoder):
+-	def __init__(A,_dict=dict):super(TomlPreserveInlineDictEncoder,A).__init__(_dict,True)
+-class TomlArraySeparatorEncoder(TomlEncoder):
+-	def __init__(B,_dict=dict,preserve=_D,separator=','):
+-		A=separator;super(TomlArraySeparatorEncoder,B).__init__(_dict,preserve)
+-		if A.strip()=='':A=','+A
+-		elif A.strip(' \t\n\r,'):raise ValueError('Invalid separator for arrays')
+-		B.separator=A
+-	def dump_list(D,v):
+-		B=[];C=_C
+-		for A in v:B.append(D.dump_value(A))
+-		while B!=[]:
+-			E=[]
+-			for A in B:
+-				if isinstance(A,list):
+-					for F in A:E.append(F)
+-				else:C+=' '+unicode(A)+D.separator
+-			B=E
+-		C+=']';return C
+-class TomlNumpyEncoder(TomlEncoder):
+-	def __init__(A,_dict=dict,preserve=_D):import numpy as B;super(TomlNumpyEncoder,A).__init__(_dict,preserve);A.dump_funcs[B.float16]=_dump_float;A.dump_funcs[B.float32]=_dump_float;A.dump_funcs[B.float64]=_dump_float;A.dump_funcs[B.int16]=A._dump_int;A.dump_funcs[B.int32]=A._dump_int;A.dump_funcs[B.int64]=A._dump_int
+-	def _dump_int(A,v):return '{}'.format(int(v))
+-class TomlPreserveCommentEncoder(TomlEncoder):
+-	def __init__(A,_dict=dict,preserve=_D):from dynaconf.vendor.toml.decoder import CommentValue as B;super(TomlPreserveCommentEncoder,A).__init__(_dict,preserve);A.dump_funcs[B]=lambda v:v.dump(A.dump_value)
+-class TomlPathlibEncoder(TomlEncoder):
+-	def _dump_pathlib_path(A,v):return _dump_str(str(v))
+-	def dump_value(A,v):
+-		if(3,4)<=sys.version_info:
+-			import pathlib as B
+-			if isinstance(v,B.PurePath):v=str(v)
+-		return super(TomlPathlibEncoder,A).dump_value(v)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/toml/ordered.py b/dynaconf/vendor/toml/ordered.py
+deleted file mode 100644
+index 0261b32..0000000
+--- a/dynaconf/vendor/toml/ordered.py
++++ /dev/null
+@@ -1,7 +0,0 @@
+-from collections import OrderedDict
+-from .  import TomlEncoder
+-from .  import TomlDecoder
+-class TomlOrderedDecoder(TomlDecoder):
+-	def __init__(A):super(A.__class__,A).__init__(_dict=OrderedDict)
+-class TomlOrderedEncoder(TomlEncoder):
+-	def __init__(A):super(A.__class__,A).__init__(_dict=OrderedDict)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/toml/tz.py b/dynaconf/vendor/toml/tz.py
+deleted file mode 100644
+index 4d6fec9..0000000
+--- a/dynaconf/vendor/toml/tz.py
++++ /dev/null
+@@ -1,10 +0,0 @@
+-from datetime import tzinfo,timedelta
+-class TomlTz(tzinfo):
+-	def __init__(A,toml_offset):
+-		B=toml_offset
+-		if B=='Z':A._raw_offset='+00:00'
+-		else:A._raw_offset=B
+-		A._sign=-1 if A._raw_offset[0]=='-'else 1;A._hours=int(A._raw_offset[1:3]);A._minutes=int(A._raw_offset[4:6])
+-	def tzname(A,dt):return'UTC'+A._raw_offset
+-	def utcoffset(A,dt):return A._sign*timedelta(hours=A._hours,minutes=A._minutes)
+-	def dst(A,dt):return timedelta(0)
+\ No newline at end of file
+diff --git a/dynaconf/vendor/vendor.txt b/dynaconf/vendor/vendor.txt
+index add308d..daa2b60 100644
+--- a/dynaconf/vendor/vendor.txt
++++ b/dynaconf/vendor/vendor.txt
+@@ -1,5 +1 @@
+ python-box==4.2.3
+-toml==0.10.8
+-click==7.1.x
+-python-dotenv==0.13.0
+-ruamel.yaml==0.16.10
+diff --git a/dynaconf/vendor_src/box/converters.py b/dynaconf/vendor_src/box/converters.py
+index c9a2293..ae42bf6 100644
+--- a/dynaconf/vendor_src/box/converters.py
++++ b/dynaconf/vendor_src/box/converters.py
+@@ -9,9 +9,9 @@ import sys
+ import warnings
+ from pathlib import Path
+ 
+-import dynaconf.vendor.ruamel.yaml as yaml
++import ruamel.yaml as yaml
+ from dynaconf.vendor.box.exceptions import BoxError, BoxWarning
+-from dynaconf.vendor import toml
++import toml
+ 
+ 
+ BOX_PARAMETERS = ('default_box', 'default_box_attr', 'conversion_box',
+diff --git a/dynaconf/vendor_src/box/from_file.py b/dynaconf/vendor_src/box/from_file.py
+index 2e2a6ad..3f76819 100644
+--- a/dynaconf/vendor_src/box/from_file.py
++++ b/dynaconf/vendor_src/box/from_file.py
+@@ -3,8 +3,8 @@
+ from json import JSONDecodeError
+ from pathlib import Path
+ from typing import Union
+-from dynaconf.vendor.toml import TomlDecodeError
+-from dynaconf.vendor.ruamel.yaml import YAMLError
++from toml import TomlDecodeError
++from ruamel.yaml import YAMLError
+ 
+ 
+ from .exceptions import BoxError
+diff --git a/dynaconf/vendor_src/click/README.md b/dynaconf/vendor_src/click/README.md
+deleted file mode 100644
+index 0f7bac3..0000000
+--- a/dynaconf/vendor_src/click/README.md
++++ /dev/null
+@@ -1,5 +0,0 @@
+-## python-click
+-
+-Vendored dep taken from: https://github.com/pallets/click
+-Licensed under MIT: https://github.com/pallets/clickl/blob/master/LICENSE
+-Current version: 7.1.x
+diff --git a/dynaconf/vendor_src/click/__init__.py b/dynaconf/vendor_src/click/__init__.py
+deleted file mode 100644
+index 9cd0129..0000000
+--- a/dynaconf/vendor_src/click/__init__.py
++++ /dev/null
+@@ -1,75 +0,0 @@
+-"""
+-Click is a simple Python module inspired by the stdlib optparse to make
+-writing command line scripts fun. Unlike other modules, it's based
+-around a simple API that does not come with too much magic and is
+-composable.
+-"""
+-from .core import Argument
+-from .core import BaseCommand
+-from .core import Command
+-from .core import CommandCollection
+-from .core import Context
+-from .core import Group
+-from .core import MultiCommand
+-from .core import Option
+-from .core import Parameter
+-from .decorators import argument
+-from .decorators import command
+-from .decorators import confirmation_option
+-from .decorators import group
+-from .decorators import help_option
+-from .decorators import make_pass_decorator
+-from .decorators import option
+-from .decorators import pass_context
+-from .decorators import pass_obj
+-from .decorators import password_option
+-from .decorators import version_option
+-from .exceptions import Abort
+-from .exceptions import BadArgumentUsage
+-from .exceptions import BadOptionUsage
+-from .exceptions import BadParameter
+-from .exceptions import ClickException
+-from .exceptions import FileError
+-from .exceptions import MissingParameter
+-from .exceptions import NoSuchOption
+-from .exceptions import UsageError
+-from .formatting import HelpFormatter
+-from .formatting import wrap_text
+-from .globals import get_current_context
+-from .parser import OptionParser
+-from .termui import clear
+-from .termui import confirm
+-from .termui import echo_via_pager
+-from .termui import edit
+-from .termui import get_terminal_size
+-from .termui import getchar
+-from .termui import launch
+-from .termui import pause
+-from .termui import progressbar
+-from .termui import prompt
+-from .termui import secho
+-from .termui import style
+-from .termui import unstyle
+-from .types import BOOL
+-from .types import Choice
+-from .types import DateTime
+-from .types import File
+-from .types import FLOAT
+-from .types import FloatRange
+-from .types import INT
+-from .types import IntRange
+-from .types import ParamType
+-from .types import Path
+-from .types import STRING
+-from .types import Tuple
+-from .types import UNPROCESSED
+-from .types import UUID
+-from .utils import echo
+-from .utils import format_filename
+-from .utils import get_app_dir
+-from .utils import get_binary_stream
+-from .utils import get_os_args
+-from .utils import get_text_stream
+-from .utils import open_file
+-
+-__version__ = "8.0.0.dev"
+diff --git a/dynaconf/vendor_src/click/_bashcomplete.py b/dynaconf/vendor_src/click/_bashcomplete.py
+deleted file mode 100644
+index b9e4900..0000000
+--- a/dynaconf/vendor_src/click/_bashcomplete.py
++++ /dev/null
+@@ -1,371 +0,0 @@
+-import copy
+-import os
+-import re
+-from collections import abc
+-
+-from .core import Argument
+-from .core import MultiCommand
+-from .core import Option
+-from .parser import split_arg_string
+-from .types import Choice
+-from .utils import echo
+-
+-WORDBREAK = "="
+-
+-# Note, only BASH version 4.4 and later have the nosort option.
+-COMPLETION_SCRIPT_BASH = """
+-%(complete_func)s() {
+-    local IFS=$'\n'
+-    COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\
+-                   COMP_CWORD=$COMP_CWORD \\
+-                   %(autocomplete_var)s=complete $1 ) )
+-    return 0
+-}
+-
+-%(complete_func)setup() {
+-    local COMPLETION_OPTIONS=""
+-    local BASH_VERSION_ARR=(${BASH_VERSION//./ })
+-    # Only BASH version 4.4 and later have the nosort option.
+-    if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] \
+-&& [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then
+-        COMPLETION_OPTIONS="-o nosort"
+-    fi
+-
+-    complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s
+-}
+-
+-%(complete_func)setup
+-"""
+-
+-COMPLETION_SCRIPT_ZSH = """
+-#compdef %(script_names)s
+-
+-%(complete_func)s() {
+-    local -a completions
+-    local -a completions_with_descriptions
+-    local -a response
+-    (( ! $+commands[%(script_names)s] )) && return 1
+-
+-    response=("${(@f)$( env COMP_WORDS=\"${words[*]}\" \\
+-                        COMP_CWORD=$((CURRENT-1)) \\
+-                        %(autocomplete_var)s=\"complete_zsh\" \\
+-                        %(script_names)s )}")
+-
+-    for key descr in ${(kv)response}; do
+-      if [[ "$descr" == "_" ]]; then
+-          completions+=("$key")
+-      else
+-          completions_with_descriptions+=("$key":"$descr")
+-      fi
+-    done
+-
+-    if [ -n "$completions_with_descriptions" ]; then
+-        _describe -V unsorted completions_with_descriptions -U
+-    fi
+-
+-    if [ -n "$completions" ]; then
+-        compadd -U -V unsorted -a completions
+-    fi
+-    compstate[insert]="automenu"
+-}
+-
+-compdef %(complete_func)s %(script_names)s
+-"""
+-
+-COMPLETION_SCRIPT_FISH = (
+-    "complete --no-files --command %(script_names)s --arguments"
+-    ' "(env %(autocomplete_var)s=complete_fish'
+-    " COMP_WORDS=(commandline -cp) COMP_CWORD=(commandline -t)"
+-    ' %(script_names)s)"'
+-)
+-
+-_completion_scripts = {
+-    "bash": COMPLETION_SCRIPT_BASH,
+-    "zsh": COMPLETION_SCRIPT_ZSH,
+-    "fish": COMPLETION_SCRIPT_FISH,
+-}
+-
+-_invalid_ident_char_re = re.compile(r"[^a-zA-Z0-9_]")
+-
+-
+-def get_completion_script(prog_name, complete_var, shell):
+-    cf_name = _invalid_ident_char_re.sub("", prog_name.replace("-", "_"))
+-    script = _completion_scripts.get(shell, COMPLETION_SCRIPT_BASH)
+-    return (
+-        script
+-        % {
+-            "complete_func": f"_{cf_name}_completion",
+-            "script_names": prog_name,
+-            "autocomplete_var": complete_var,
+-        }
+-    ).strip() + ";"
+-
+-
+-def resolve_ctx(cli, prog_name, args):
+-    """Parse into a hierarchy of contexts. Contexts are connected
+-    through the parent variable.
+-
+-    :param cli: command definition
+-    :param prog_name: the program that is running
+-    :param args: full list of args
+-    :return: the final context/command parsed
+-    """
+-    ctx = cli.make_context(prog_name, args, resilient_parsing=True)
+-    args = ctx.protected_args + ctx.args
+-    while args:
+-        if isinstance(ctx.command, MultiCommand):
+-            if not ctx.command.chain:
+-                cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
+-                if cmd is None:
+-                    return ctx
+-                ctx = cmd.make_context(
+-                    cmd_name, args, parent=ctx, resilient_parsing=True
+-                )
+-                args = ctx.protected_args + ctx.args
+-            else:
+-                # Walk chained subcommand contexts saving the last one.
+-                while args:
+-                    cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
+-                    if cmd is None:
+-                        return ctx
+-                    sub_ctx = cmd.make_context(
+-                        cmd_name,
+-                        args,
+-                        parent=ctx,
+-                        allow_extra_args=True,
+-                        allow_interspersed_args=False,
+-                        resilient_parsing=True,
+-                    )
+-                    args = sub_ctx.args
+-                ctx = sub_ctx
+-                args = sub_ctx.protected_args + sub_ctx.args
+-        else:
+-            break
+-    return ctx
+-
+-
+-def start_of_option(param_str):
+-    """
+-    :param param_str: param_str to check
+-    :return: whether or not this is the start of an option declaration
+-        (i.e. starts "-" or "--")
+-    """
+-    return param_str and param_str[:1] == "-"
+-
+-
+-def is_incomplete_option(all_args, cmd_param):
+-    """
+-    :param all_args: the full original list of args supplied
+-    :param cmd_param: the current command parameter
+-    :return: whether or not the last option declaration (i.e. starts
+-        "-" or "--") is incomplete and corresponds to this cmd_param. In
+-        other words whether this cmd_param option can still accept
+-        values
+-    """
+-    if not isinstance(cmd_param, Option):
+-        return False
+-    if cmd_param.is_flag:
+-        return False
+-    last_option = None
+-    for index, arg_str in enumerate(
+-        reversed([arg for arg in all_args if arg != WORDBREAK])
+-    ):
+-        if index + 1 > cmd_param.nargs:
+-            break
+-        if start_of_option(arg_str):
+-            last_option = arg_str
+-
+-    return True if last_option and last_option in cmd_param.opts else False
+-
+-
+-def is_incomplete_argument(current_params, cmd_param):
+-    """
+-    :param current_params: the current params and values for this
+-        argument as already entered
+-    :param cmd_param: the current command parameter
+-    :return: whether or not the last argument is incomplete and
+-        corresponds to this cmd_param. In other words whether or not the
+-        this cmd_param argument can still accept values
+-    """
+-    if not isinstance(cmd_param, Argument):
+-        return False
+-    current_param_values = current_params[cmd_param.name]
+-    if current_param_values is None:
+-        return True
+-    if cmd_param.nargs == -1:
+-        return True
+-    if (
+-        isinstance(current_param_values, abc.Iterable)
+-        and cmd_param.nargs > 1
+-        and len(current_param_values) < cmd_param.nargs
+-    ):
+-        return True
+-    return False
+-
+-
+-def get_user_autocompletions(ctx, args, incomplete, cmd_param):
+-    """
+-    :param ctx: context associated with the parsed command
+-    :param args: full list of args
+-    :param incomplete: the incomplete text to autocomplete
+-    :param cmd_param: command definition
+-    :return: all the possible user-specified completions for the param
+-    """
+-    results = []
+-    if isinstance(cmd_param.type, Choice):
+-        # Choices don't support descriptions.
+-        results = [
+-            (c, None) for c in cmd_param.type.choices if str(c).startswith(incomplete)
+-        ]
+-    elif cmd_param.autocompletion is not None:
+-        dynamic_completions = cmd_param.autocompletion(
+-            ctx=ctx, args=args, incomplete=incomplete
+-        )
+-        results = [
+-            c if isinstance(c, tuple) else (c, None) for c in dynamic_completions
+-        ]
+-    return results
+-
+-
+-def get_visible_commands_starting_with(ctx, starts_with):
+-    """
+-    :param ctx: context associated with the parsed command
+-    :starts_with: string that visible commands must start with.
+-    :return: all visible (not hidden) commands that start with starts_with.
+-    """
+-    for c in ctx.command.list_commands(ctx):
+-        if c.startswith(starts_with):
+-            command = ctx.command.get_command(ctx, c)
+-            if not command.hidden:
+-                yield command
+-
+-
+-def add_subcommand_completions(ctx, incomplete, completions_out):
+-    # Add subcommand completions.
+-    if isinstance(ctx.command, MultiCommand):
+-        completions_out.extend(
+-            [
+-                (c.name, c.get_short_help_str())
+-                for c in get_visible_commands_starting_with(ctx, incomplete)
+-            ]
+-        )
+-
+-    # Walk up the context list and add any other completion
+-    # possibilities from chained commands
+-    while ctx.parent is not None:
+-        ctx = ctx.parent
+-        if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
+-            remaining_commands = [
+-                c
+-                for c in get_visible_commands_starting_with(ctx, incomplete)
+-                if c.name not in ctx.protected_args
+-            ]
+-            completions_out.extend(
+-                [(c.name, c.get_short_help_str()) for c in remaining_commands]
+-            )
+-
+-
+-def get_choices(cli, prog_name, args, incomplete):
+-    """
+-    :param cli: command definition
+-    :param prog_name: the program that is running
+-    :param args: full list of args
+-    :param incomplete: the incomplete text to autocomplete
+-    :return: all the possible completions for the incomplete
+-    """
+-    all_args = copy.deepcopy(args)
+-
+-    ctx = resolve_ctx(cli, prog_name, args)
+-    if ctx is None:
+-        return []
+-
+-    has_double_dash = "--" in all_args
+-
+-    # In newer versions of bash long opts with '='s are partitioned, but
+-    # it's easier to parse without the '='
+-    if start_of_option(incomplete) and WORDBREAK in incomplete:
+-        partition_incomplete = incomplete.partition(WORDBREAK)
+-        all_args.append(partition_incomplete[0])
+-        incomplete = partition_incomplete[2]
+-    elif incomplete == WORDBREAK:
+-        incomplete = ""
+-
+-    completions = []
+-    if not has_double_dash and start_of_option(incomplete):
+-        # completions for partial options
+-        for param in ctx.command.params:
+-            if isinstance(param, Option) and not param.hidden:
+-                param_opts = [
+-                    param_opt
+-                    for param_opt in param.opts + param.secondary_opts
+-                    if param_opt not in all_args or param.multiple
+-                ]
+-                completions.extend(
+-                    [(o, param.help) for o in param_opts if o.startswith(incomplete)]
+-                )
+-        return completions
+-    # completion for option values from user supplied values
+-    for param in ctx.command.params:
+-        if is_incomplete_option(all_args, param):
+-            return get_user_autocompletions(ctx, all_args, incomplete, param)
+-    # completion for argument values from user supplied values
+-    for param in ctx.command.params:
+-        if is_incomplete_argument(ctx.params, param):
+-            return get_user_autocompletions(ctx, all_args, incomplete, param)
+-
+-    add_subcommand_completions(ctx, incomplete, completions)
+-    # Sort before returning so that proper ordering can be enforced in custom types.
+-    return sorted(completions)
+-
+-
+-def do_complete(cli, prog_name, include_descriptions):
+-    cwords = split_arg_string(os.environ["COMP_WORDS"])
+-    cword = int(os.environ["COMP_CWORD"])
+-    args = cwords[1:cword]
+-    try:
+-        incomplete = cwords[cword]
+-    except IndexError:
+-        incomplete = ""
+-
+-    for item in get_choices(cli, prog_name, args, incomplete):
+-        echo(item[0])
+-        if include_descriptions:
+-            # ZSH has trouble dealing with empty array parameters when
+-            # returned from commands, use '_' to indicate no description
+-            # is present.
+-            echo(item[1] if item[1] else "_")
+-
+-    return True
+-
+-
+-def do_complete_fish(cli, prog_name):
+-    cwords = split_arg_string(os.environ["COMP_WORDS"])
+-    incomplete = os.environ["COMP_CWORD"]
+-    args = cwords[1:]
+-
+-    for item in get_choices(cli, prog_name, args, incomplete):
+-        if item[1]:
+-            echo(f"{item[0]}\t{item[1]}")
+-        else:
+-            echo(item[0])
+-
+-    return True
+-
+-
+-def bashcomplete(cli, prog_name, complete_var, complete_instr):
+-    if "_" in complete_instr:
+-        command, shell = complete_instr.split("_", 1)
+-    else:
+-        command = complete_instr
+-        shell = "bash"
+-
+-    if command == "source":
+-        echo(get_completion_script(prog_name, complete_var, shell))
+-        return True
+-    elif command == "complete":
+-        if shell == "fish":
+-            return do_complete_fish(cli, prog_name)
+-        elif shell in {"bash", "zsh"}:
+-            return do_complete(cli, prog_name, shell == "zsh")
+-
+-    return False
+diff --git a/dynaconf/vendor_src/click/_compat.py b/dynaconf/vendor_src/click/_compat.py
+deleted file mode 100644
+index 85568ca..0000000
+--- a/dynaconf/vendor_src/click/_compat.py
++++ /dev/null
+@@ -1,611 +0,0 @@
+-import codecs
+-import io
+-import os
+-import re
+-import sys
+-from weakref import WeakKeyDictionary
+-
+-CYGWIN = sys.platform.startswith("cygwin")
+-MSYS2 = sys.platform.startswith("win") and ("GCC" in sys.version)
+-# Determine local App Engine environment, per Google's own suggestion
+-APP_ENGINE = "APPENGINE_RUNTIME" in os.environ and "Development/" in os.environ.get(
+-    "SERVER_SOFTWARE", ""
+-)
+-WIN = sys.platform.startswith("win") and not APP_ENGINE and not MSYS2
+-DEFAULT_COLUMNS = 80
+-auto_wrap_for_ansi = None
+-colorama = None
+-get_winterm_size = None
+-_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]")
+-
+-
+-def get_filesystem_encoding():
+-    return sys.getfilesystemencoding() or sys.getdefaultencoding()
+-
+-
+-def _make_text_stream(
+-    stream, encoding, errors, force_readable=False, force_writable=False
+-):
+-    if encoding is None:
+-        encoding = get_best_encoding(stream)
+-    if errors is None:
+-        errors = "replace"
+-    return _NonClosingTextIOWrapper(
+-        stream,
+-        encoding,
+-        errors,
+-        line_buffering=True,
+-        force_readable=force_readable,
+-        force_writable=force_writable,
+-    )
+-
+-
+-def is_ascii_encoding(encoding):
+-    """Checks if a given encoding is ascii."""
+-    try:
+-        return codecs.lookup(encoding).name == "ascii"
+-    except LookupError:
+-        return False
+-
+-
+-def get_best_encoding(stream):
+-    """Returns the default stream encoding if not found."""
+-    rv = getattr(stream, "encoding", None) or sys.getdefaultencoding()
+-    if is_ascii_encoding(rv):
+-        return "utf-8"
+-    return rv
+-
+-
+-class _NonClosingTextIOWrapper(io.TextIOWrapper):
+-    def __init__(
+-        self,
+-        stream,
+-        encoding,
+-        errors,
+-        force_readable=False,
+-        force_writable=False,
+-        **extra,
+-    ):
+-        self._stream = stream = _FixupStream(stream, force_readable, force_writable)
+-        super().__init__(stream, encoding, errors, **extra)
+-
+-    def __del__(self):
+-        try:
+-            self.detach()
+-        except Exception:
+-            pass
+-
+-    def isatty(self):
+-        # https://bitbucket.org/pypy/pypy/issue/1803
+-        return self._stream.isatty()
+-
+-
+-class _FixupStream:
+-    """The new io interface needs more from streams than streams
+-    traditionally implement.  As such, this fix-up code is necessary in
+-    some circumstances.
+-
+-    The forcing of readable and writable flags are there because some tools
+-    put badly patched objects on sys (one such offender are certain version
+-    of jupyter notebook).
+-    """
+-
+-    def __init__(self, stream, force_readable=False, force_writable=False):
+-        self._stream = stream
+-        self._force_readable = force_readable
+-        self._force_writable = force_writable
+-
+-    def __getattr__(self, name):
+-        return getattr(self._stream, name)
+-
+-    def read1(self, size):
+-        f = getattr(self._stream, "read1", None)
+-        if f is not None:
+-            return f(size)
+-
+-        return self._stream.read(size)
+-
+-    def readable(self):
+-        if self._force_readable:
+-            return True
+-        x = getattr(self._stream, "readable", None)
+-        if x is not None:
+-            return x()
+-        try:
+-            self._stream.read(0)
+-        except Exception:
+-            return False
+-        return True
+-
+-    def writable(self):
+-        if self._force_writable:
+-            return True
+-        x = getattr(self._stream, "writable", None)
+-        if x is not None:
+-            return x()
+-        try:
+-            self._stream.write("")
+-        except Exception:
+-            try:
+-                self._stream.write(b"")
+-            except Exception:
+-                return False
+-        return True
+-
+-    def seekable(self):
+-        x = getattr(self._stream, "seekable", None)
+-        if x is not None:
+-            return x()
+-        try:
+-            self._stream.seek(self._stream.tell())
+-        except Exception:
+-            return False
+-        return True
+-
+-
+-def is_bytes(x):
+-    return isinstance(x, (bytes, memoryview, bytearray))
+-
+-
+-def _is_binary_reader(stream, default=False):
+-    try:
+-        return isinstance(stream.read(0), bytes)
+-    except Exception:
+-        return default
+-        # This happens in some cases where the stream was already
+-        # closed.  In this case, we assume the default.
+-
+-
+-def _is_binary_writer(stream, default=False):
+-    try:
+-        stream.write(b"")
+-    except Exception:
+-        try:
+-            stream.write("")
+-            return False
+-        except Exception:
+-            pass
+-        return default
+-    return True
+-
+-
+-def _find_binary_reader(stream):
+-    # We need to figure out if the given stream is already binary.
+-    # This can happen because the official docs recommend detaching
+-    # the streams to get binary streams.  Some code might do this, so
+-    # we need to deal with this case explicitly.
+-    if _is_binary_reader(stream, False):
+-        return stream
+-
+-    buf = getattr(stream, "buffer", None)
+-
+-    # Same situation here; this time we assume that the buffer is
+-    # actually binary in case it's closed.
+-    if buf is not None and _is_binary_reader(buf, True):
+-        return buf
+-
+-
+-def _find_binary_writer(stream):
+-    # We need to figure out if the given stream is already binary.
+-    # This can happen because the official docs recommend detaching
+-    # the streams to get binary streams.  Some code might do this, so
+-    # we need to deal with this case explicitly.
+-    if _is_binary_writer(stream, False):
+-        return stream
+-
+-    buf = getattr(stream, "buffer", None)
+-
+-    # Same situation here; this time we assume that the buffer is
+-    # actually binary in case it's closed.
+-    if buf is not None and _is_binary_writer(buf, True):
+-        return buf
+-
+-
+-def _stream_is_misconfigured(stream):
+-    """A stream is misconfigured if its encoding is ASCII."""
+-    # If the stream does not have an encoding set, we assume it's set
+-    # to ASCII.  This appears to happen in certain unittest
+-    # environments.  It's not quite clear what the correct behavior is
+-    # but this at least will force Click to recover somehow.
+-    return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii")
+-
+-
+-def _is_compat_stream_attr(stream, attr, value):
+-    """A stream attribute is compatible if it is equal to the
+-    desired value or the desired value is unset and the attribute
+-    has a value.
+-    """
+-    stream_value = getattr(stream, attr, None)
+-    return stream_value == value or (value is None and stream_value is not None)
+-
+-
+-def _is_compatible_text_stream(stream, encoding, errors):
+-    """Check if a stream's encoding and errors attributes are
+-    compatible with the desired values.
+-    """
+-    return _is_compat_stream_attr(
+-        stream, "encoding", encoding
+-    ) and _is_compat_stream_attr(stream, "errors", errors)
+-
+-
+-def _force_correct_text_stream(
+-    text_stream,
+-    encoding,
+-    errors,
+-    is_binary,
+-    find_binary,
+-    force_readable=False,
+-    force_writable=False,
+-):
+-    if is_binary(text_stream, False):
+-        binary_reader = text_stream
+-    else:
+-        # If the stream looks compatible, and won't default to a
+-        # misconfigured ascii encoding, return it as-is.
+-        if _is_compatible_text_stream(text_stream, encoding, errors) and not (
+-            encoding is None and _stream_is_misconfigured(text_stream)
+-        ):
+-            return text_stream
+-
+-        # Otherwise, get the underlying binary reader.
+-        binary_reader = find_binary(text_stream)
+-
+-        # If that's not possible, silently use the original reader
+-        # and get mojibake instead of exceptions.
+-        if binary_reader is None:
+-            return text_stream
+-
+-    # Default errors to replace instead of strict in order to get
+-    # something that works.
+-    if errors is None:
+-        errors = "replace"
+-
+-    # Wrap the binary stream in a text stream with the correct
+-    # encoding parameters.
+-    return _make_text_stream(
+-        binary_reader,
+-        encoding,
+-        errors,
+-        force_readable=force_readable,
+-        force_writable=force_writable,
+-    )
+-
+-
+-def _force_correct_text_reader(text_reader, encoding, errors, force_readable=False):
+-    return _force_correct_text_stream(
+-        text_reader,
+-        encoding,
+-        errors,
+-        _is_binary_reader,
+-        _find_binary_reader,
+-        force_readable=force_readable,
+-    )
+-
+-
+-def _force_correct_text_writer(text_writer, encoding, errors, force_writable=False):
+-    return _force_correct_text_stream(
+-        text_writer,
+-        encoding,
+-        errors,
+-        _is_binary_writer,
+-        _find_binary_writer,
+-        force_writable=force_writable,
+-    )
+-
+-
+-def get_binary_stdin():
+-    reader = _find_binary_reader(sys.stdin)
+-    if reader is None:
+-        raise RuntimeError("Was not able to determine binary stream for sys.stdin.")
+-    return reader
+-
+-
+-def get_binary_stdout():
+-    writer = _find_binary_writer(sys.stdout)
+-    if writer is None:
+-        raise RuntimeError("Was not able to determine binary stream for sys.stdout.")
+-    return writer
+-
+-
+-def get_binary_stderr():
+-    writer = _find_binary_writer(sys.stderr)
+-    if writer is None:
+-        raise RuntimeError("Was not able to determine binary stream for sys.stderr.")
+-    return writer
+-
+-
+-def get_text_stdin(encoding=None, errors=None):
+-    rv = _get_windows_console_stream(sys.stdin, encoding, errors)
+-    if rv is not None:
+-        return rv
+-    return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True)
+-
+-
+-def get_text_stdout(encoding=None, errors=None):
+-    rv = _get_windows_console_stream(sys.stdout, encoding, errors)
+-    if rv is not None:
+-        return rv
+-    return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True)
+-
+-
+-def get_text_stderr(encoding=None, errors=None):
+-    rv = _get_windows_console_stream(sys.stderr, encoding, errors)
+-    if rv is not None:
+-        return rv
+-    return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True)
+-
+-
+-def filename_to_ui(value):
+-    if isinstance(value, bytes):
+-        value = value.decode(get_filesystem_encoding(), "replace")
+-    else:
+-        value = value.encode("utf-8", "surrogateescape").decode("utf-8", "replace")
+-    return value
+-
+-
+-def get_strerror(e, default=None):
+-    if hasattr(e, "strerror"):
+-        msg = e.strerror
+-    else:
+-        if default is not None:
+-            msg = default
+-        else:
+-            msg = str(e)
+-    if isinstance(msg, bytes):
+-        msg = msg.decode("utf-8", "replace")
+-    return msg
+-
+-
+-def _wrap_io_open(file, mode, encoding, errors):
+-    """Handles not passing ``encoding`` and ``errors`` in binary mode."""
+-    if "b" in mode:
+-        return open(file, mode)
+-
+-    return open(file, mode, encoding=encoding, errors=errors)
+-
+-
+-def open_stream(filename, mode="r", encoding=None, errors="strict", atomic=False):
+-    binary = "b" in mode
+-
+-    # Standard streams first.  These are simple because they don't need
+-    # special handling for the atomic flag.  It's entirely ignored.
+-    if filename == "-":
+-        if any(m in mode for m in ["w", "a", "x"]):
+-            if binary:
+-                return get_binary_stdout(), False
+-            return get_text_stdout(encoding=encoding, errors=errors), False
+-        if binary:
+-            return get_binary_stdin(), False
+-        return get_text_stdin(encoding=encoding, errors=errors), False
+-
+-    # Non-atomic writes directly go out through the regular open functions.
+-    if not atomic:
+-        return _wrap_io_open(filename, mode, encoding, errors), True
+-
+-    # Some usability stuff for atomic writes
+-    if "a" in mode:
+-        raise ValueError(
+-            "Appending to an existing file is not supported, because that"
+-            " would involve an expensive `copy`-operation to a temporary"
+-            " file. Open the file in normal `w`-mode and copy explicitly"
+-            " if that's what you're after."
+-        )
+-    if "x" in mode:
+-        raise ValueError("Use the `overwrite`-parameter instead.")
+-    if "w" not in mode:
+-        raise ValueError("Atomic writes only make sense with `w`-mode.")
+-
+-    # Atomic writes are more complicated.  They work by opening a file
+-    # as a proxy in the same folder and then using the fdopen
+-    # functionality to wrap it in a Python file.  Then we wrap it in an
+-    # atomic file that moves the file over on close.
+-    import errno
+-    import random
+-
+-    try:
+-        perm = os.stat(filename).st_mode
+-    except OSError:
+-        perm = None
+-
+-    flags = os.O_RDWR | os.O_CREAT | os.O_EXCL
+-
+-    if binary:
+-        flags |= getattr(os, "O_BINARY", 0)
+-
+-    while True:
+-        tmp_filename = os.path.join(
+-            os.path.dirname(filename),
+-            f".__atomic-write{random.randrange(1 << 32):08x}",
+-        )
+-        try:
+-            fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm)
+-            break
+-        except OSError as e:
+-            if e.errno == errno.EEXIST or (
+-                os.name == "nt"
+-                and e.errno == errno.EACCES
+-                and os.path.isdir(e.filename)
+-                and os.access(e.filename, os.W_OK)
+-            ):
+-                continue
+-            raise
+-
+-    if perm is not None:
+-        os.chmod(tmp_filename, perm)  # in case perm includes bits in umask
+-
+-    f = _wrap_io_open(fd, mode, encoding, errors)
+-    return _AtomicFile(f, tmp_filename, os.path.realpath(filename)), True
+-
+-
+-class _AtomicFile:
+-    def __init__(self, f, tmp_filename, real_filename):
+-        self._f = f
+-        self._tmp_filename = tmp_filename
+-        self._real_filename = real_filename
+-        self.closed = False
+-
+-    @property
+-    def name(self):
+-        return self._real_filename
+-
+-    def close(self, delete=False):
+-        if self.closed:
+-            return
+-        self._f.close()
+-        os.replace(self._tmp_filename, self._real_filename)
+-        self.closed = True
+-
+-    def __getattr__(self, name):
+-        return getattr(self._f, name)
+-
+-    def __enter__(self):
+-        return self
+-
+-    def __exit__(self, exc_type, exc_value, tb):
+-        self.close(delete=exc_type is not None)
+-
+-    def __repr__(self):
+-        return repr(self._f)
+-
+-
+-def strip_ansi(value):
+-    return _ansi_re.sub("", value)
+-
+-
+-def _is_jupyter_kernel_output(stream):
+-    if WIN:
+-        # TODO: Couldn't test on Windows, should't try to support until
+-        # someone tests the details wrt colorama.
+-        return
+-
+-    while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)):
+-        stream = stream._stream
+-
+-    return stream.__class__.__module__.startswith("ipykernel.")
+-
+-
+-def should_strip_ansi(stream=None, color=None):
+-    if color is None:
+-        if stream is None:
+-            stream = sys.stdin
+-        return not isatty(stream) and not _is_jupyter_kernel_output(stream)
+-    return not color
+-
+-
+-# If we're on Windows, we provide transparent integration through
+-# colorama.  This will make ANSI colors through the echo function
+-# work automatically.
+-if WIN:
+-    # Windows has a smaller terminal
+-    DEFAULT_COLUMNS = 79
+-
+-    from ._winconsole import _get_windows_console_stream
+-
+-    def _get_argv_encoding():
+-        import locale
+-
+-        return locale.getpreferredencoding()
+-
+-    try:
+-        import colorama
+-    except ImportError:
+-        pass
+-    else:
+-        _ansi_stream_wrappers = WeakKeyDictionary()
+-
+-        def auto_wrap_for_ansi(stream, color=None):
+-            """This function wraps a stream so that calls through colorama
+-            are issued to the win32 console API to recolor on demand.  It
+-            also ensures to reset the colors if a write call is interrupted
+-            to not destroy the console afterwards.
+-            """
+-            try:
+-                cached = _ansi_stream_wrappers.get(stream)
+-            except Exception:
+-                cached = None
+-            if cached is not None:
+-                return cached
+-            strip = should_strip_ansi(stream, color)
+-            ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip)
+-            rv = ansi_wrapper.stream
+-            _write = rv.write
+-
+-            def _safe_write(s):
+-                try:
+-                    return _write(s)
+-                except BaseException:
+-                    ansi_wrapper.reset_all()
+-                    raise
+-
+-            rv.write = _safe_write
+-            try:
+-                _ansi_stream_wrappers[stream] = rv
+-            except Exception:
+-                pass
+-            return rv
+-
+-        def get_winterm_size():
+-            win = colorama.win32.GetConsoleScreenBufferInfo(
+-                colorama.win32.STDOUT
+-            ).srWindow
+-            return win.Right - win.Left, win.Bottom - win.Top
+-
+-
+-else:
+-
+-    def _get_argv_encoding():
+-        return getattr(sys.stdin, "encoding", None) or get_filesystem_encoding()
+-
+-    def _get_windows_console_stream(f, encoding, errors):
+-        return None
+-
+-
+-def term_len(x):
+-    return len(strip_ansi(x))
+-
+-
+-def isatty(stream):
+-    try:
+-        return stream.isatty()
+-    except Exception:
+-        return False
+-
+-
+-def _make_cached_stream_func(src_func, wrapper_func):
+-    cache = WeakKeyDictionary()
+-
+-    def func():
+-        stream = src_func()
+-        try:
+-            rv = cache.get(stream)
+-        except Exception:
+-            rv = None
+-        if rv is not None:
+-            return rv
+-        rv = wrapper_func()
+-        try:
+-            stream = src_func()  # In case wrapper_func() modified the stream
+-            cache[stream] = rv
+-        except Exception:
+-            pass
+-        return rv
+-
+-    return func
+-
+-
+-_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin)
+-_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout)
+-_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr)
+-
+-
+-binary_streams = {
+-    "stdin": get_binary_stdin,
+-    "stdout": get_binary_stdout,
+-    "stderr": get_binary_stderr,
+-}
+-
+-text_streams = {
+-    "stdin": get_text_stdin,
+-    "stdout": get_text_stdout,
+-    "stderr": get_text_stderr,
+-}
+diff --git a/dynaconf/vendor_src/click/_termui_impl.py b/dynaconf/vendor_src/click/_termui_impl.py
+deleted file mode 100644
+index 7837250..0000000
+--- a/dynaconf/vendor_src/click/_termui_impl.py
++++ /dev/null
+@@ -1,667 +0,0 @@
+-"""
+-This module contains implementations for the termui module. To keep the
+-import time of Click down, some infrequently used functionality is
+-placed in this module and only imported as needed.
+-"""
+-import contextlib
+-import math
+-import os
+-import sys
+-import time
+-
+-from ._compat import _default_text_stdout
+-from ._compat import CYGWIN
+-from ._compat import get_best_encoding
+-from ._compat import isatty
+-from ._compat import open_stream
+-from ._compat import strip_ansi
+-from ._compat import term_len
+-from ._compat import WIN
+-from .exceptions import ClickException
+-from .utils import echo
+-
+-if os.name == "nt":
+-    BEFORE_BAR = "\r"
+-    AFTER_BAR = "\n"
+-else:
+-    BEFORE_BAR = "\r\033[?25l"
+-    AFTER_BAR = "\033[?25h\n"
+-
+-
+-def _length_hint(obj):
+-    """Returns the length hint of an object."""
+-    try:
+-        return len(obj)
+-    except (AttributeError, TypeError):
+-        try:
+-            get_hint = type(obj).__length_hint__
+-        except AttributeError:
+-            return None
+-        try:
+-            hint = get_hint(obj)
+-        except TypeError:
+-            return None
+-        if hint is NotImplemented or not isinstance(hint, int) or hint < 0:
+-            return None
+-        return hint
+-
+-
+-class ProgressBar:
+-    def __init__(
+-        self,
+-        iterable,
+-        length=None,
+-        fill_char="#",
+-        empty_char=" ",
+-        bar_template="%(bar)s",
+-        info_sep="  ",
+-        show_eta=True,
+-        show_percent=None,
+-        show_pos=False,
+-        item_show_func=None,
+-        label=None,
+-        file=None,
+-        color=None,
+-        width=30,
+-    ):
+-        self.fill_char = fill_char
+-        self.empty_char = empty_char
+-        self.bar_template = bar_template
+-        self.info_sep = info_sep
+-        self.show_eta = show_eta
+-        self.show_percent = show_percent
+-        self.show_pos = show_pos
+-        self.item_show_func = item_show_func
+-        self.label = label or ""
+-        if file is None:
+-            file = _default_text_stdout()
+-        self.file = file
+-        self.color = color
+-        self.width = width
+-        self.autowidth = width == 0
+-
+-        if length is None:
+-            length = _length_hint(iterable)
+-        if iterable is None:
+-            if length is None:
+-                raise TypeError("iterable or length is required")
+-            iterable = range(length)
+-        self.iter = iter(iterable)
+-        self.length = length
+-        self.length_known = length is not None
+-        self.pos = 0
+-        self.avg = []
+-        self.start = self.last_eta = time.time()
+-        self.eta_known = False
+-        self.finished = False
+-        self.max_width = None
+-        self.entered = False
+-        self.current_item = None
+-        self.is_hidden = not isatty(self.file)
+-        self._last_line = None
+-        self.short_limit = 0.5
+-
+-    def __enter__(self):
+-        self.entered = True
+-        self.render_progress()
+-        return self
+-
+-    def __exit__(self, exc_type, exc_value, tb):
+-        self.render_finish()
+-
+-    def __iter__(self):
+-        if not self.entered:
+-            raise RuntimeError("You need to use progress bars in a with block.")
+-        self.render_progress()
+-        return self.generator()
+-
+-    def __next__(self):
+-        # Iteration is defined in terms of a generator function,
+-        # returned by iter(self); use that to define next(). This works
+-        # because `self.iter` is an iterable consumed by that generator,
+-        # so it is re-entry safe. Calling `next(self.generator())`
+-        # twice works and does "what you want".
+-        return next(iter(self))
+-
+-    def is_fast(self):
+-        return time.time() - self.start <= self.short_limit
+-
+-    def render_finish(self):
+-        if self.is_hidden or self.is_fast():
+-            return
+-        self.file.write(AFTER_BAR)
+-        self.file.flush()
+-
+-    @property
+-    def pct(self):
+-        if self.finished:
+-            return 1.0
+-        return min(self.pos / (float(self.length) or 1), 1.0)
+-
+-    @property
+-    def time_per_iteration(self):
+-        if not self.avg:
+-            return 0.0
+-        return sum(self.avg) / float(len(self.avg))
+-
+-    @property
+-    def eta(self):
+-        if self.length_known and not self.finished:
+-            return self.time_per_iteration * (self.length - self.pos)
+-        return 0.0
+-
+-    def format_eta(self):
+-        if self.eta_known:
+-            t = int(self.eta)
+-            seconds = t % 60
+-            t //= 60
+-            minutes = t % 60
+-            t //= 60
+-            hours = t % 24
+-            t //= 24
+-            if t > 0:
+-                return f"{t}d {hours:02}:{minutes:02}:{seconds:02}"
+-            else:
+-                return f"{hours:02}:{minutes:02}:{seconds:02}"
+-        return ""
+-
+-    def format_pos(self):
+-        pos = str(self.pos)
+-        if self.length_known:
+-            pos += f"/{self.length}"
+-        return pos
+-
+-    def format_pct(self):
+-        return f"{int(self.pct * 100): 4}%"[1:]
+-
+-    def format_bar(self):
+-        if self.length_known:
+-            bar_length = int(self.pct * self.width)
+-            bar = self.fill_char * bar_length
+-            bar += self.empty_char * (self.width - bar_length)
+-        elif self.finished:
+-            bar = self.fill_char * self.width
+-        else:
+-            bar = list(self.empty_char * (self.width or 1))
+-            if self.time_per_iteration != 0:
+-                bar[
+-                    int(
+-                        (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5)
+-                        * self.width
+-                    )
+-                ] = self.fill_char
+-            bar = "".join(bar)
+-        return bar
+-
+-    def format_progress_line(self):
+-        show_percent = self.show_percent
+-
+-        info_bits = []
+-        if self.length_known and show_percent is None:
+-            show_percent = not self.show_pos
+-
+-        if self.show_pos:
+-            info_bits.append(self.format_pos())
+-        if show_percent:
+-            info_bits.append(self.format_pct())
+-        if self.show_eta and self.eta_known and not self.finished:
+-            info_bits.append(self.format_eta())
+-        if self.item_show_func is not None:
+-            item_info = self.item_show_func(self.current_item)
+-            if item_info is not None:
+-                info_bits.append(item_info)
+-
+-        return (
+-            self.bar_template
+-            % {
+-                "label": self.label,
+-                "bar": self.format_bar(),
+-                "info": self.info_sep.join(info_bits),
+-            }
+-        ).rstrip()
+-
+-    def render_progress(self):
+-        from .termui import get_terminal_size
+-
+-        if self.is_hidden:
+-            return
+-
+-        buf = []
+-        # Update width in case the terminal has been resized
+-        if self.autowidth:
+-            old_width = self.width
+-            self.width = 0
+-            clutter_length = term_len(self.format_progress_line())
+-            new_width = max(0, get_terminal_size()[0] - clutter_length)
+-            if new_width < old_width:
+-                buf.append(BEFORE_BAR)
+-                buf.append(" " * self.max_width)
+-                self.max_width = new_width
+-            self.width = new_width
+-
+-        clear_width = self.width
+-        if self.max_width is not None:
+-            clear_width = self.max_width
+-
+-        buf.append(BEFORE_BAR)
+-        line = self.format_progress_line()
+-        line_len = term_len(line)
+-        if self.max_width is None or self.max_width < line_len:
+-            self.max_width = line_len
+-
+-        buf.append(line)
+-        buf.append(" " * (clear_width - line_len))
+-        line = "".join(buf)
+-        # Render the line only if it changed.
+-
+-        if line != self._last_line and not self.is_fast():
+-            self._last_line = line
+-            echo(line, file=self.file, color=self.color, nl=False)
+-            self.file.flush()
+-
+-    def make_step(self, n_steps):
+-        self.pos += n_steps
+-        if self.length_known and self.pos >= self.length:
+-            self.finished = True
+-
+-        if (time.time() - self.last_eta) < 1.0:
+-            return
+-
+-        self.last_eta = time.time()
+-
+-        # self.avg is a rolling list of length <= 7 of steps where steps are
+-        # defined as time elapsed divided by the total progress through
+-        # self.length.
+-        if self.pos:
+-            step = (time.time() - self.start) / self.pos
+-        else:
+-            step = time.time() - self.start
+-
+-        self.avg = self.avg[-6:] + [step]
+-
+-        self.eta_known = self.length_known
+-
+-    def update(self, n_steps, current_item=None):
+-        """Update the progress bar by advancing a specified number of
+-        steps, and optionally set the ``current_item`` for this new
+-        position.
+-
+-        :param n_steps: Number of steps to advance.
+-        :param current_item: Optional item to set as ``current_item``
+-            for the updated position.
+-
+-        .. versionadded:: 8.0
+-            Added the ``current_item`` optional parameter.
+-        """
+-        self.make_step(n_steps)
+-        if current_item is not None:
+-            self.current_item = current_item
+-        self.render_progress()
+-
+-    def finish(self):
+-        self.eta_known = 0
+-        self.current_item = None
+-        self.finished = True
+-
+-    def generator(self):
+-        """Return a generator which yields the items added to the bar
+-        during construction, and updates the progress bar *after* the
+-        yielded block returns.
+-        """
+-        # WARNING: the iterator interface for `ProgressBar` relies on
+-        # this and only works because this is a simple generator which
+-        # doesn't create or manage additional state. If this function
+-        # changes, the impact should be evaluated both against
+-        # `iter(bar)` and `next(bar)`. `next()` in particular may call
+-        # `self.generator()` repeatedly, and this must remain safe in
+-        # order for that interface to work.
+-        if not self.entered:
+-            raise RuntimeError("You need to use progress bars in a with block.")
+-
+-        if self.is_hidden:
+-            yield from self.iter
+-        else:
+-            for rv in self.iter:
+-                self.current_item = rv
+-                yield rv
+-                self.update(1)
+-            self.finish()
+-            self.render_progress()
+-
+-
+-def pager(generator, color=None):
+-    """Decide what method to use for paging through text."""
+-    stdout = _default_text_stdout()
+-    if not isatty(sys.stdin) or not isatty(stdout):
+-        return _nullpager(stdout, generator, color)
+-    pager_cmd = (os.environ.get("PAGER", None) or "").strip()
+-    if pager_cmd:
+-        if WIN:
+-            return _tempfilepager(generator, pager_cmd, color)
+-        return _pipepager(generator, pager_cmd, color)
+-    if os.environ.get("TERM") in ("dumb", "emacs"):
+-        return _nullpager(stdout, generator, color)
+-    if WIN or sys.platform.startswith("os2"):
+-        return _tempfilepager(generator, "more <", color)
+-    if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0:
+-        return _pipepager(generator, "less", color)
+-
+-    import tempfile
+-
+-    fd, filename = tempfile.mkstemp()
+-    os.close(fd)
+-    try:
+-        if hasattr(os, "system") and os.system(f'more "{filename}"') == 0:
+-            return _pipepager(generator, "more", color)
+-        return _nullpager(stdout, generator, color)
+-    finally:
+-        os.unlink(filename)
+-
+-
+-def _pipepager(generator, cmd, color):
+-    """Page through text by feeding it to another program.  Invoking a
+-    pager through this might support colors.
+-    """
+-    import subprocess
+-
+-    env = dict(os.environ)
+-
+-    # If we're piping to less we might support colors under the
+-    # condition that
+-    cmd_detail = cmd.rsplit("/", 1)[-1].split()
+-    if color is None and cmd_detail[0] == "less":
+-        less_flags = f"{os.environ.get('LESS', '')}{' '.join(cmd_detail[1:])}"
+-        if not less_flags:
+-            env["LESS"] = "-R"
+-            color = True
+-        elif "r" in less_flags or "R" in less_flags:
+-            color = True
+-
+-    c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env)
+-    encoding = get_best_encoding(c.stdin)
+-    try:
+-        for text in generator:
+-            if not color:
+-                text = strip_ansi(text)
+-
+-            c.stdin.write(text.encode(encoding, "replace"))
+-    except (OSError, KeyboardInterrupt):
+-        pass
+-    else:
+-        c.stdin.close()
+-
+-    # Less doesn't respect ^C, but catches it for its own UI purposes (aborting
+-    # search or other commands inside less).
+-    #
+-    # That means when the user hits ^C, the parent process (click) terminates,
+-    # but less is still alive, paging the output and messing up the terminal.
+-    #
+-    # If the user wants to make the pager exit on ^C, they should set
+-    # `LESS='-K'`. It's not our decision to make.
+-    while True:
+-        try:
+-            c.wait()
+-        except KeyboardInterrupt:
+-            pass
+-        else:
+-            break
+-
+-
+-def _tempfilepager(generator, cmd, color):
+-    """Page through text by invoking a program on a temporary file."""
+-    import tempfile
+-
+-    filename = tempfile.mktemp()
+-    # TODO: This never terminates if the passed generator never terminates.
+-    text = "".join(generator)
+-    if not color:
+-        text = strip_ansi(text)
+-    encoding = get_best_encoding(sys.stdout)
+-    with open_stream(filename, "wb")[0] as f:
+-        f.write(text.encode(encoding))
+-    try:
+-        os.system(f'{cmd} "{filename}"')
+-    finally:
+-        os.unlink(filename)
+-
+-
+-def _nullpager(stream, generator, color):
+-    """Simply print unformatted text.  This is the ultimate fallback."""
+-    for text in generator:
+-        if not color:
+-            text = strip_ansi(text)
+-        stream.write(text)
+-
+-
+-class Editor:
+-    def __init__(self, editor=None, env=None, require_save=True, extension=".txt"):
+-        self.editor = editor
+-        self.env = env
+-        self.require_save = require_save
+-        self.extension = extension
+-
+-    def get_editor(self):
+-        if self.editor is not None:
+-            return self.editor
+-        for key in "VISUAL", "EDITOR":
+-            rv = os.environ.get(key)
+-            if rv:
+-                return rv
+-        if WIN:
+-            return "notepad"
+-        for editor in "sensible-editor", "vim", "nano":
+-            if os.system(f"which {editor} >/dev/null 2>&1") == 0:
+-                return editor
+-        return "vi"
+-
+-    def edit_file(self, filename):
+-        import subprocess
+-
+-        editor = self.get_editor()
+-        if self.env:
+-            environ = os.environ.copy()
+-            environ.update(self.env)
+-        else:
+-            environ = None
+-        try:
+-            c = subprocess.Popen(f'{editor} "{filename}"', env=environ, shell=True)
+-            exit_code = c.wait()
+-            if exit_code != 0:
+-                raise ClickException(f"{editor}: Editing failed!")
+-        except OSError as e:
+-            raise ClickException(f"{editor}: Editing failed: {e}")
+-
+-    def edit(self, text):
+-        import tempfile
+-
+-        text = text or ""
+-        binary_data = type(text) in [bytes, bytearray]
+-
+-        if not binary_data and text and not text.endswith("\n"):
+-            text += "\n"
+-
+-        fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension)
+-        try:
+-            if not binary_data:
+-                if WIN:
+-                    encoding = "utf-8-sig"
+-                    text = text.replace("\n", "\r\n")
+-                else:
+-                    encoding = "utf-8"
+-                text = text.encode(encoding)
+-
+-            f = os.fdopen(fd, "wb")
+-            f.write(text)
+-            f.close()
+-            timestamp = os.path.getmtime(name)
+-
+-            self.edit_file(name)
+-
+-            if self.require_save and os.path.getmtime(name) == timestamp:
+-                return None
+-
+-            f = open(name, "rb")
+-            try:
+-                rv = f.read()
+-            finally:
+-                f.close()
+-            if binary_data:
+-                return rv
+-            else:
+-                return rv.decode("utf-8-sig").replace("\r\n", "\n")
+-        finally:
+-            os.unlink(name)
+-
+-
+-def open_url(url, wait=False, locate=False):
+-    import subprocess
+-
+-    def _unquote_file(url):
+-        import urllib
+-
+-        if url.startswith("file://"):
+-            url = urllib.unquote(url[7:])
+-        return url
+-
+-    if sys.platform == "darwin":
+-        args = ["open"]
+-        if wait:
+-            args.append("-W")
+-        if locate:
+-            args.append("-R")
+-        args.append(_unquote_file(url))
+-        null = open("/dev/null", "w")
+-        try:
+-            return subprocess.Popen(args, stderr=null).wait()
+-        finally:
+-            null.close()
+-    elif WIN:
+-        if locate:
+-            url = _unquote_file(url.replace('"', ""))
+-            args = f'explorer /select,"{url}"'
+-        else:
+-            url = url.replace('"', "")
+-            wait = "/WAIT" if wait else ""
+-            args = f'start {wait} "" "{url}"'
+-        return os.system(args)
+-    elif CYGWIN:
+-        if locate:
+-            url = os.path.dirname(_unquote_file(url).replace('"', ""))
+-            args = f'cygstart "{url}"'
+-        else:
+-            url = url.replace('"', "")
+-            wait = "-w" if wait else ""
+-            args = f'cygstart {wait} "{url}"'
+-        return os.system(args)
+-
+-    try:
+-        if locate:
+-            url = os.path.dirname(_unquote_file(url)) or "."
+-        else:
+-            url = _unquote_file(url)
+-        c = subprocess.Popen(["xdg-open", url])
+-        if wait:
+-            return c.wait()
+-        return 0
+-    except OSError:
+-        if url.startswith(("http://", "https://")) and not locate and not wait:
+-            import webbrowser
+-
+-            webbrowser.open(url)
+-            return 0
+-        return 1
+-
+-
+-def _translate_ch_to_exc(ch):
+-    if ch == "\x03":
+-        raise KeyboardInterrupt()
+-    if ch == "\x04" and not WIN:  # Unix-like, Ctrl+D
+-        raise EOFError()
+-    if ch == "\x1a" and WIN:  # Windows, Ctrl+Z
+-        raise EOFError()
+-
+-
+-if WIN:
+-    import msvcrt
+-
+-    @contextlib.contextmanager
+-    def raw_terminal():
+-        yield
+-
+-    def getchar(echo):
+-        # The function `getch` will return a bytes object corresponding to
+-        # the pressed character. Since Windows 10 build 1803, it will also
+-        # return \x00 when called a second time after pressing a regular key.
+-        #
+-        # `getwch` does not share this probably-bugged behavior. Moreover, it
+-        # returns a Unicode object by default, which is what we want.
+-        #
+-        # Either of these functions will return \x00 or \xe0 to indicate
+-        # a special key, and you need to call the same function again to get
+-        # the "rest" of the code. The fun part is that \u00e0 is
+-        # "latin small letter a with grave", so if you type that on a French
+-        # keyboard, you _also_ get a \xe0.
+-        # E.g., consider the Up arrow. This returns \xe0 and then \x48. The
+-        # resulting Unicode string reads as "a with grave" + "capital H".
+-        # This is indistinguishable from when the user actually types
+-        # "a with grave" and then "capital H".
+-        #
+-        # When \xe0 is returned, we assume it's part of a special-key sequence
+-        # and call `getwch` again, but that means that when the user types
+-        # the \u00e0 character, `getchar` doesn't return until a second
+-        # character is typed.
+-        # The alternative is returning immediately, but that would mess up
+-        # cross-platform handling of arrow keys and others that start with
+-        # \xe0. Another option is using `getch`, but then we can't reliably
+-        # read non-ASCII characters, because return values of `getch` are
+-        # limited to the current 8-bit codepage.
+-        #
+-        # Anyway, Click doesn't claim to do this Right(tm), and using `getwch`
+-        # is doing the right thing in more situations than with `getch`.
+-        if echo:
+-            func = msvcrt.getwche
+-        else:
+-            func = msvcrt.getwch
+-
+-        rv = func()
+-        if rv in ("\x00", "\xe0"):
+-            # \x00 and \xe0 are control characters that indicate special key,
+-            # see above.
+-            rv += func()
+-        _translate_ch_to_exc(rv)
+-        return rv
+-
+-
+-else:
+-    import tty
+-    import termios
+-
+-    @contextlib.contextmanager
+-    def raw_terminal():
+-        if not isatty(sys.stdin):
+-            f = open("/dev/tty")
+-            fd = f.fileno()
+-        else:
+-            fd = sys.stdin.fileno()
+-            f = None
+-        try:
+-            old_settings = termios.tcgetattr(fd)
+-            try:
+-                tty.setraw(fd)
+-                yield fd
+-            finally:
+-                termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
+-                sys.stdout.flush()
+-                if f is not None:
+-                    f.close()
+-        except termios.error:
+-            pass
+-
+-    def getchar(echo):
+-        with raw_terminal() as fd:
+-            ch = os.read(fd, 32)
+-            ch = ch.decode(get_best_encoding(sys.stdin), "replace")
+-            if echo and isatty(sys.stdout):
+-                sys.stdout.write(ch)
+-            _translate_ch_to_exc(ch)
+-            return ch
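[Reviewer note, not part of the patch: the `_termui_impl.py` code deleted above maps raw control characters to Python exceptions before returning a keypress. A minimal standalone sketch of that mapping, with the platform check reduced to a `windows` flag for illustration:]

```python
def translate_ch_to_exc(ch, windows=False):
    """Mirror of the deleted click helper: turn control characters
    read from a raw terminal into the exceptions Python users expect."""
    if ch == "\x03":                  # Ctrl+C always interrupts
        raise KeyboardInterrupt()
    if ch == "\x04" and not windows:  # Ctrl+D is EOF on Unix-like systems
        raise EOFError()
    if ch == "\x1a" and windows:      # Ctrl+Z is EOF on Windows
        raise EOFError()
    # any other character is passed through unchanged (returns None here)
```

[Ordinary characters fall through untouched, which is why `getchar` can call this unconditionally on every read.]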
+diff --git a/dynaconf/vendor_src/click/_textwrap.py b/dynaconf/vendor_src/click/_textwrap.py
+deleted file mode 100644
+index 7a052b7..0000000
+--- a/dynaconf/vendor_src/click/_textwrap.py
++++ /dev/null
+@@ -1,37 +0,0 @@
+-import textwrap
+-from contextlib import contextmanager
+-
+-
+-class TextWrapper(textwrap.TextWrapper):
+-    def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
+-        space_left = max(width - cur_len, 1)
+-
+-        if self.break_long_words:
+-            last = reversed_chunks[-1]
+-            cut = last[:space_left]
+-            res = last[space_left:]
+-            cur_line.append(cut)
+-            reversed_chunks[-1] = res
+-        elif not cur_line:
+-            cur_line.append(reversed_chunks.pop())
+-
+-    @contextmanager
+-    def extra_indent(self, indent):
+-        old_initial_indent = self.initial_indent
+-        old_subsequent_indent = self.subsequent_indent
+-        self.initial_indent += indent
+-        self.subsequent_indent += indent
+-        try:
+-            yield
+-        finally:
+-            self.initial_indent = old_initial_indent
+-            self.subsequent_indent = old_subsequent_indent
+-
+-    def indent_only(self, text):
+-        rv = []
+-        for idx, line in enumerate(text.splitlines()):
+-            indent = self.initial_indent
+-            if idx > 0:
+-                indent = self.subsequent_indent
+-            rv.append(f"{indent}{line}")
+-        return "\n".join(rv)
+diff --git a/dynaconf/vendor_src/click/_unicodefun.py b/dynaconf/vendor_src/click/_unicodefun.py
+deleted file mode 100644
+index 53ec9d2..0000000
+--- a/dynaconf/vendor_src/click/_unicodefun.py
++++ /dev/null
+@@ -1,82 +0,0 @@
+-import codecs
+-import os
+-
+-
+-def _verify_python_env():
+-    """Ensures that the environment is good for Unicode."""
+-    try:
+-        import locale
+-
+-        fs_enc = codecs.lookup(locale.getpreferredencoding()).name
+-    except Exception:
+-        fs_enc = "ascii"
+-    if fs_enc != "ascii":
+-        return
+-
+-    extra = ""
+-    if os.name == "posix":
+-        import subprocess
+-
+-        try:
+-            rv = subprocess.Popen(
+-                ["locale", "-a"], stdout=subprocess.PIPE, stderr=subprocess.PIPE
+-            ).communicate()[0]
+-        except OSError:
+-            rv = b""
+-        good_locales = set()
+-        has_c_utf8 = False
+-
+-        # Make sure we're operating on text here.
+-        if isinstance(rv, bytes):
+-            rv = rv.decode("ascii", "replace")
+-
+-        for line in rv.splitlines():
+-            locale = line.strip()
+-            if locale.lower().endswith((".utf-8", ".utf8")):
+-                good_locales.add(locale)
+-                if locale.lower() in ("c.utf8", "c.utf-8"):
+-                    has_c_utf8 = True
+-
+-        extra += "\n\n"
+-        if not good_locales:
+-            extra += (
+-                "Additional information: on this system no suitable"
+-                " UTF-8 locales were discovered. This most likely"
+-                " requires resolving by reconfiguring the locale"
+-                " system."
+-            )
+-        elif has_c_utf8:
+-            extra += (
+-                "This system supports the C.UTF-8 locale which is"
+-                " recommended. You might be able to resolve your issue"
+-                " by exporting the following environment variables:\n\n"
+-                "    export LC_ALL=C.UTF-8\n"
+-                "    export LANG=C.UTF-8"
+-            )
+-        else:
+-            extra += (
+-                "This system lists some UTF-8 supporting locales that"
+-                " you can pick from. The following suitable locales"
+-                f" were discovered: {', '.join(sorted(good_locales))}"
+-            )
+-
+-        bad_locale = None
+-        for locale in os.environ.get("LC_ALL"), os.environ.get("LANG"):
+-            if locale and locale.lower().endswith((".utf-8", ".utf8")):
+-                bad_locale = locale
+-            if locale is not None:
+-                break
+-        if bad_locale is not None:
+-            extra += (
+-                "\n\nClick discovered that you exported a UTF-8 locale"
+-                " but the locale system could not pick up from it"
+-                " because it does not exist. The exported locale is"
+-                f" {bad_locale!r} but it is not supported"
+-            )
+-
+-    raise RuntimeError(
+-        "Click will abort further execution because Python was"
+-        " configured to use ASCII as encoding for the environment."
+-        " Consult https://click.palletsprojects.com/unicode-support/"
+-        f" for mitigation steps.{extra}"
+-    )
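[Reviewer note, not part of the patch: the `_unicodefun.py` module deleted above scans `locale -a` output for usable UTF-8 locales to build its error message. The core of that scan can be sketched as a pure function over the listing text:]

```python
def find_utf8_locales(locale_listing):
    """Collect locales from a `locale -a`-style listing that end in
    .utf-8/.utf8, and flag whether C.UTF-8 is among them (the locale
    the deleted click code recommends exporting)."""
    good_locales = set()
    has_c_utf8 = False
    for line in locale_listing.splitlines():
        loc = line.strip()
        if loc.lower().endswith((".utf-8", ".utf8")):
            good_locales.add(loc)
            if loc.lower() in ("c.utf8", "c.utf-8"):
                has_c_utf8 = True
    return good_locales, has_c_utf8
```

[The deleted code only runs this path when `locale.getpreferredencoding()` resolves to ASCII, i.e. when the environment is misconfigured for Unicode.]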
+diff --git a/dynaconf/vendor_src/click/_winconsole.py b/dynaconf/vendor_src/click/_winconsole.py
+deleted file mode 100644
+index 923fdba..0000000
+--- a/dynaconf/vendor_src/click/_winconsole.py
++++ /dev/null
+@@ -1,308 +0,0 @@
+-# This module is based on the excellent work by Adam Bartoš who
+-# provided a lot of what went into the implementation here in
+-# the discussion to issue1602 in the Python bug tracker.
+-#
+-# There are some general differences in regards to how this works
+-# compared to the original patches as we do not need to patch
+-# the entire interpreter but just work in our little world of
+-# echo and prompt.
+-import ctypes
+-import io
+-import time
+-from ctypes import byref
+-from ctypes import c_char
+-from ctypes import c_char_p
+-from ctypes import c_int
+-from ctypes import c_ssize_t
+-from ctypes import c_ulong
+-from ctypes import c_void_p
+-from ctypes import POINTER
+-from ctypes import py_object
+-from ctypes import windll
+-from ctypes import WINFUNCTYPE
+-from ctypes.wintypes import DWORD
+-from ctypes.wintypes import HANDLE
+-from ctypes.wintypes import LPCWSTR
+-from ctypes.wintypes import LPWSTR
+-
+-import msvcrt
+-
+-from ._compat import _NonClosingTextIOWrapper
+-
+-try:
+-    from ctypes import pythonapi
+-except ImportError:
+-    pythonapi = None
+-else:
+-    PyObject_GetBuffer = pythonapi.PyObject_GetBuffer
+-    PyBuffer_Release = pythonapi.PyBuffer_Release
+-
+-
+-c_ssize_p = POINTER(c_ssize_t)
+-
+-kernel32 = windll.kernel32
+-GetStdHandle = kernel32.GetStdHandle
+-ReadConsoleW = kernel32.ReadConsoleW
+-WriteConsoleW = kernel32.WriteConsoleW
+-GetConsoleMode = kernel32.GetConsoleMode
+-GetLastError = kernel32.GetLastError
+-GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32))
+-CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))(
+-    ("CommandLineToArgvW", windll.shell32)
+-)
+-LocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)(
+-    ("LocalFree", windll.kernel32)
+-)
+-
+-
+-STDIN_HANDLE = GetStdHandle(-10)
+-STDOUT_HANDLE = GetStdHandle(-11)
+-STDERR_HANDLE = GetStdHandle(-12)
+-
+-
+-PyBUF_SIMPLE = 0
+-PyBUF_WRITABLE = 1
+-
+-ERROR_SUCCESS = 0
+-ERROR_NOT_ENOUGH_MEMORY = 8
+-ERROR_OPERATION_ABORTED = 995
+-
+-STDIN_FILENO = 0
+-STDOUT_FILENO = 1
+-STDERR_FILENO = 2
+-
+-EOF = b"\x1a"
+-MAX_BYTES_WRITTEN = 32767
+-
+-
+-class Py_buffer(ctypes.Structure):
+-    _fields_ = [
+-        ("buf", c_void_p),
+-        ("obj", py_object),
+-        ("len", c_ssize_t),
+-        ("itemsize", c_ssize_t),
+-        ("readonly", c_int),
+-        ("ndim", c_int),
+-        ("format", c_char_p),
+-        ("shape", c_ssize_p),
+-        ("strides", c_ssize_p),
+-        ("suboffsets", c_ssize_p),
+-        ("internal", c_void_p),
+-    ]
+-
+-
+-# On PyPy we cannot get buffers so our ability to operate here is
+-# severely limited.
+-if pythonapi is None:
+-    get_buffer = None
+-else:
+-
+-    def get_buffer(obj, writable=False):
+-        buf = Py_buffer()
+-        flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE
+-        PyObject_GetBuffer(py_object(obj), byref(buf), flags)
+-        try:
+-            buffer_type = c_char * buf.len
+-            return buffer_type.from_address(buf.buf)
+-        finally:
+-            PyBuffer_Release(byref(buf))
+-
+-
+-class _WindowsConsoleRawIOBase(io.RawIOBase):
+-    def __init__(self, handle):
+-        self.handle = handle
+-
+-    def isatty(self):
+-        io.RawIOBase.isatty(self)
+-        return True
+-
+-
+-class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
+-    def readable(self):
+-        return True
+-
+-    def readinto(self, b):
+-        bytes_to_be_read = len(b)
+-        if not bytes_to_be_read:
+-            return 0
+-        elif bytes_to_be_read % 2:
+-            raise ValueError(
+-                "cannot read odd number of bytes from UTF-16-LE encoded console"
+-            )
+-
+-        buffer = get_buffer(b, writable=True)
+-        code_units_to_be_read = bytes_to_be_read // 2
+-        code_units_read = c_ulong()
+-
+-        rv = ReadConsoleW(
+-            HANDLE(self.handle),
+-            buffer,
+-            code_units_to_be_read,
+-            byref(code_units_read),
+-            None,
+-        )
+-        if GetLastError() == ERROR_OPERATION_ABORTED:
+-            # wait for KeyboardInterrupt
+-            time.sleep(0.1)
+-        if not rv:
+-            raise OSError(f"Windows error: {GetLastError()}")
+-
+-        if buffer[0] == EOF:
+-            return 0
+-        return 2 * code_units_read.value
+-
+-
+-class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
+-    def writable(self):
+-        return True
+-
+-    @staticmethod
+-    def _get_error_message(errno):
+-        if errno == ERROR_SUCCESS:
+-            return "ERROR_SUCCESS"
+-        elif errno == ERROR_NOT_ENOUGH_MEMORY:
+-            return "ERROR_NOT_ENOUGH_MEMORY"
+-        return f"Windows error {errno}"
+-
+-    def write(self, b):
+-        bytes_to_be_written = len(b)
+-        buf = get_buffer(b)
+-        code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
+-        code_units_written = c_ulong()
+-
+-        WriteConsoleW(
+-            HANDLE(self.handle),
+-            buf,
+-            code_units_to_be_written,
+-            byref(code_units_written),
+-            None,
+-        )
+-        bytes_written = 2 * code_units_written.value
+-
+-        if bytes_written == 0 and bytes_to_be_written > 0:
+-            raise OSError(self._get_error_message(GetLastError()))
+-        return bytes_written
+-
+-
+-class ConsoleStream:
+-    def __init__(self, text_stream, byte_stream):
+-        self._text_stream = text_stream
+-        self.buffer = byte_stream
+-
+-    @property
+-    def name(self):
+-        return self.buffer.name
+-
+-    def write(self, x):
+-        if isinstance(x, str):
+-            return self._text_stream.write(x)
+-        try:
+-            self.flush()
+-        except Exception:
+-            pass
+-        return self.buffer.write(x)
+-
+-    def writelines(self, lines):
+-        for line in lines:
+-            self.write(line)
+-
+-    def __getattr__(self, name):
+-        return getattr(self._text_stream, name)
+-
+-    def isatty(self):
+-        return self.buffer.isatty()
+-
+-    def __repr__(self):
+-        return f"<ConsoleStream name={self.name!r} encoding={self.encoding!r}>"
+-
+-
+-class WindowsChunkedWriter:
+-    """
+-    Wraps a stream (such as stdout), acting as a transparent proxy for all
+-    attribute access apart from method 'write()' which we wrap to write in
+-    limited chunks due to a Windows limitation on binary console streams.
+-    """
+-
+-    def __init__(self, wrapped):
+-        # double-underscore everything to prevent clashes with names of
+-        # attributes on the wrapped stream object.
+-        self.__wrapped = wrapped
+-
+-    def __getattr__(self, name):
+-        return getattr(self.__wrapped, name)
+-
+-    def write(self, text):
+-        total_to_write = len(text)
+-        written = 0
+-
+-        while written < total_to_write:
+-            to_write = min(total_to_write - written, MAX_BYTES_WRITTEN)
+-            self.__wrapped.write(text[written : written + to_write])
+-            written += to_write
+-
+-
+-def _get_text_stdin(buffer_stream):
+-    text_stream = _NonClosingTextIOWrapper(
+-        io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)),
+-        "utf-16-le",
+-        "strict",
+-        line_buffering=True,
+-    )
+-    return ConsoleStream(text_stream, buffer_stream)
+-
+-
+-def _get_text_stdout(buffer_stream):
+-    text_stream = _NonClosingTextIOWrapper(
+-        io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)),
+-        "utf-16-le",
+-        "strict",
+-        line_buffering=True,
+-    )
+-    return ConsoleStream(text_stream, buffer_stream)
+-
+-
+-def _get_text_stderr(buffer_stream):
+-    text_stream = _NonClosingTextIOWrapper(
+-        io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)),
+-        "utf-16-le",
+-        "strict",
+-        line_buffering=True,
+-    )
+-    return ConsoleStream(text_stream, buffer_stream)
+-
+-
+-_stream_factories = {
+-    0: _get_text_stdin,
+-    1: _get_text_stdout,
+-    2: _get_text_stderr,
+-}
+-
+-
+-def _is_console(f):
+-    if not hasattr(f, "fileno"):
+-        return False
+-
+-    try:
+-        fileno = f.fileno()
+-    except OSError:
+-        return False
+-
+-    handle = msvcrt.get_osfhandle(fileno)
+-    return bool(GetConsoleMode(handle, byref(DWORD())))
+-
+-
+-def _get_windows_console_stream(f, encoding, errors):
+-    if (
+-        get_buffer is not None
+-        and encoding in {"utf-16-le", None}
+-        and errors in {"strict", None}
+-        and _is_console(f)
+-    ):
+-        func = _stream_factories.get(f.fileno())
+-        if func is not None:
+-            f = getattr(f, "buffer", None)
+-
+-            if f is None:
+-                return None
+-
+-            return func(f)
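[Reviewer note, not part of the patch: `WindowsChunkedWriter` in the `_winconsole.py` file deleted above exists because binary console streams on Windows reject writes larger than `MAX_BYTES_WRITTEN`. The chunking loop itself is platform-independent and can be sketched against any stream; the small chunk size here is illustrative, click uses 32767:]

```python
import io

def chunked_write(stream, text, chunk=7):
    """Write `text` to `stream` in slices of at most `chunk` characters,
    mirroring the loop in the deleted WindowsChunkedWriter.write()."""
    total = len(text)
    written = 0
    while written < total:
        to_write = min(total - written, chunk)
        stream.write(text[written:written + to_write])
        written += to_write
    return written

buf = io.StringIO()
chunked_write(buf, "hello world")
```

[The wrapper forwards every other attribute to the wrapped stream via `__getattr__`, so only `write` behaves differently.]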
+diff --git a/dynaconf/vendor_src/click/core.py b/dynaconf/vendor_src/click/core.py
+deleted file mode 100644
+index b7124df..0000000
+--- a/dynaconf/vendor_src/click/core.py
++++ /dev/null
+@@ -1,2070 +0,0 @@
+-import errno
+-import inspect
+-import os
+-import sys
+-from contextlib import contextmanager
+-from functools import update_wrapper
+-from itertools import repeat
+-
+-from ._unicodefun import _verify_python_env
+-from .exceptions import Abort
+-from .exceptions import BadParameter
+-from .exceptions import ClickException
+-from .exceptions import Exit
+-from .exceptions import MissingParameter
+-from .exceptions import UsageError
+-from .formatting import HelpFormatter
+-from .formatting import join_options
+-from .globals import pop_context
+-from .globals import push_context
+-from .parser import OptionParser
+-from .parser import split_opt
+-from .termui import confirm
+-from .termui import prompt
+-from .termui import style
+-from .types import BOOL
+-from .types import convert_type
+-from .types import IntRange
+-from .utils import echo
+-from .utils import make_default_short_help
+-from .utils import make_str
+-from .utils import PacifyFlushWrapper
+-
+-_missing = object()
+-
+-SUBCOMMAND_METAVAR = "COMMAND [ARGS]..."
+-SUBCOMMANDS_METAVAR = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..."
+-
+-DEPRECATED_HELP_NOTICE = " (DEPRECATED)"
+-DEPRECATED_INVOKE_NOTICE = "DeprecationWarning: The command {name} is deprecated."
+-
+-
+-def _maybe_show_deprecated_notice(cmd):
+-    if cmd.deprecated:
+-        echo(style(DEPRECATED_INVOKE_NOTICE.format(name=cmd.name), fg="red"), err=True)
+-
+-
+-def fast_exit(code):
+-    """Exit without garbage collection, this speeds up exit by about 10ms for
+-    things like bash completion.
+-    """
+-    sys.stdout.flush()
+-    sys.stderr.flush()
+-    os._exit(code)
+-
+-
+-def _bashcomplete(cmd, prog_name, complete_var=None):
+-    """Internal handler for the bash completion support."""
+-    if complete_var is None:
+-        complete_var = f"_{prog_name}_COMPLETE".replace("-", "_").upper()
+-    complete_instr = os.environ.get(complete_var)
+-    if not complete_instr:
+-        return
+-
+-    from ._bashcomplete import bashcomplete
+-
+-    if bashcomplete(cmd, prog_name, complete_var, complete_instr):
+-        fast_exit(1)
+-
+-
+-def _check_multicommand(base_command, cmd_name, cmd, register=False):
+-    if not base_command.chain or not isinstance(cmd, MultiCommand):
+-        return
+-    if register:
+-        hint = (
+-            "It is not possible to add multi commands as children to"
+-            " another multi command that is in chain mode."
+-        )
+-    else:
+-        hint = (
+-            "Found a multi command as subcommand to a multi command"
+-            " that is in chain mode. This is not supported."
+-        )
+-    raise RuntimeError(
+-        f"{hint}. Command {base_command.name!r} is set to chain and"
+-        f" {cmd_name!r} was added as a subcommand but it in itself is a"
+-        f" multi command. ({cmd_name!r} is a {type(cmd).__name__}"
+-        f" within a chained {type(base_command).__name__} named"
+-        f" {base_command.name!r})."
+-    )
+-
+-
+-def batch(iterable, batch_size):
+-    return list(zip(*repeat(iter(iterable), batch_size)))
+-
+-
+-@contextmanager
+-def augment_usage_errors(ctx, param=None):
+-    """Context manager that attaches extra information to exceptions."""
+-    try:
+-        yield
+-    except BadParameter as e:
+-        if e.ctx is None:
+-            e.ctx = ctx
+-        if param is not None and e.param is None:
+-            e.param = param
+-        raise
+-    except UsageError as e:
+-        if e.ctx is None:
+-            e.ctx = ctx
+-        raise
+-
+-
+-def iter_params_for_processing(invocation_order, declaration_order):
+-    """Given a sequence of parameters in the order as should be considered
+-    for processing and an iterable of parameters that exist, this returns
+-    a list in the correct order as they should be processed.
+-    """
+-
+-    def sort_key(item):
+-        try:
+-            idx = invocation_order.index(item)
+-        except ValueError:
+-            idx = float("inf")
+-        return (not item.is_eager, idx)
+-
+-    return sorted(declaration_order, key=sort_key)
+-
+-
+-class ParameterSource:
+-    """This is an enum that indicates the source of a command line parameter.
+-
+-    The enum has one of the following values: COMMANDLINE,
+-    ENVIRONMENT, DEFAULT, DEFAULT_MAP.  The DEFAULT indicates that the
+-    default value in the decorator was used.  This class should be
+-    converted to an enum when Python 2 support is dropped.
+-    """
+-
+-    COMMANDLINE = "COMMANDLINE"
+-    ENVIRONMENT = "ENVIRONMENT"
+-    DEFAULT = "DEFAULT"
+-    DEFAULT_MAP = "DEFAULT_MAP"
+-
+-    VALUES = {COMMANDLINE, ENVIRONMENT, DEFAULT, DEFAULT_MAP}
+-
+-    @classmethod
+-    def validate(cls, value):
+-        """Validate that the specified value is a valid enum.
+-
+-        This method will raise a ValueError if the value is
+-        not a valid enum.
+-
+-        :param value: the string value to verify
+-        """
+-        if value not in cls.VALUES:
+-            raise ValueError(
+-                f"Invalid ParameterSource value: {value!r}. Valid"
+-                f" values are: {','.join(cls.VALUES)}"
+-            )
+-
+-
+-class Context:
+-    """The context is a special internal object that holds state relevant
+-    for the script execution at every single level.  It's normally invisible
+-    to commands unless they opt-in to getting access to it.
+-
+-    The context is useful as it can pass internal objects around and can
+-    control special execution features such as reading data from
+-    environment variables.
+-
+-    A context can be used as context manager in which case it will call
+-    :meth:`close` on teardown.
+-
+-    .. versionadded:: 2.0
+-       Added the `resilient_parsing`, `help_option_names`,
+-       `token_normalize_func` parameters.
+-
+-    .. versionadded:: 3.0
+-       Added the `allow_extra_args` and `allow_interspersed_args`
+-       parameters.
+-
+-    .. versionadded:: 4.0
+-       Added the `color`, `ignore_unknown_options`, and
+-       `max_content_width` parameters.
+-
+-    .. versionadded:: 7.1
+-       Added the `show_default` parameter.
+-
+-    :param command: the command class for this context.
+-    :param parent: the parent context.
+-    :param info_name: the info name for this invocation.  Generally this
+-                      is the most descriptive name for the script or
+-                      command.  For the toplevel script it is usually
+-                      the name of the script, for commands below it it's
+-                      the name of the script.
+-    :param obj: an arbitrary object of user data.
+-    :param auto_envvar_prefix: the prefix to use for automatic environment
+-                               variables.  If this is `None` then reading
+-                               from environment variables is disabled.  This
+-                               does not affect manually set environment
+-                               variables which are always read.
+-    :param default_map: a dictionary (like object) with default values
+-                        for parameters.
+-    :param terminal_width: the width of the terminal.  The default is
+-                           inherit from parent context.  If no context
+-                           defines the terminal width then auto
+-                           detection will be applied.
+-    :param max_content_width: the maximum width for content rendered by
+-                              Click (this currently only affects help
+-                              pages).  This defaults to 80 characters if
+-                              not overridden.  In other words: even if the
+-                              terminal is larger than that, Click will not
+-                              format things wider than 80 characters by
+-                              default.  In addition to that, formatters might
+-                              add some safety mapping on the right.
+-    :param resilient_parsing: if this flag is enabled then Click will
+-                              parse without any interactivity or callback
+-                              invocation.  Default values will also be
+-                              ignored.  This is useful for implementing
+-                              things such as completion support.
+-    :param allow_extra_args: if this is set to `True` then extra arguments
+-                             at the end will not raise an error and will be
+-                             kept on the context.  The default is to inherit
+-                             from the command.
+-    :param allow_interspersed_args: if this is set to `False` then options
+-                                    and arguments cannot be mixed.  The
+-                                    default is to inherit from the command.
+-    :param ignore_unknown_options: instructs click to ignore options it does
+-                                   not know and keeps them for later
+-                                   processing.
+-    :param help_option_names: optionally a list of strings that define how
+-                              the default help parameter is named.  The
+-                              default is ``['--help']``.
+-    :param token_normalize_func: an optional function that is used to
+-                                 normalize tokens (options, choices,
+-                                 etc.).  This for instance can be used to
+-                                 implement case insensitive behavior.
+-    :param color: controls if the terminal supports ANSI colors or not.  The
+-                  default is autodetection.  This is only needed if ANSI
+-                  codes are used in texts that Click prints which is by
+-                  default not the case.  This for instance would affect
+-                  help output.
+-    :param show_default: if True, shows defaults for all options.
+-                    Even if an option is later created with show_default=False,
+-                    this command-level setting overrides it.
+-    """
+-
+-    def __init__(
+-        self,
+-        command,
+-        parent=None,
+-        info_name=None,
+-        obj=None,
+-        auto_envvar_prefix=None,
+-        default_map=None,
+-        terminal_width=None,
+-        max_content_width=None,
+-        resilient_parsing=False,
+-        allow_extra_args=None,
+-        allow_interspersed_args=None,
+-        ignore_unknown_options=None,
+-        help_option_names=None,
+-        token_normalize_func=None,
+-        color=None,
+-        show_default=None,
+-    ):
+-        #: the parent context or `None` if none exists.
+-        self.parent = parent
+-        #: the :class:`Command` for this context.
+-        self.command = command
+-        #: the descriptive information name
+-        self.info_name = info_name
+-        #: the parsed parameters except if the value is hidden in which
+-        #: case it's not remembered.
+-        self.params = {}
+-        #: the leftover arguments.
+-        self.args = []
+-        #: protected arguments.  These are arguments that are prepended
+-        #: to `args` when certain parsing scenarios are encountered but
+-        #: must be never propagated to another arguments.  This is used
+-        #: to implement nested parsing.
+-        self.protected_args = []
+-        if obj is None and parent is not None:
+-            obj = parent.obj
+-        #: the user object stored.
+-        self.obj = obj
+-        self._meta = getattr(parent, "meta", {})
+-
+-        #: A dictionary (-like object) with defaults for parameters.
+-        if (
+-            default_map is None
+-            and parent is not None
+-            and parent.default_map is not None
+-        ):
+-            default_map = parent.default_map.get(info_name)
+-        self.default_map = default_map
+-
+-        #: This flag indicates if a subcommand is going to be executed. A
+-        #: group callback can use this information to figure out if it's
+-        #: being executed directly or because the execution flow passes
+-        #: onwards to a subcommand. By default it's None, but it can be
+-        #: the name of the subcommand to execute.
+-        #:
+-        #: If chaining is enabled this will be set to ``'*'`` in case
+-        #: any commands are executed.  It is however not possible to
+-        #: figure out which ones.  If you require this knowledge you
+-        #: should use a :func:`resultcallback`.
+-        self.invoked_subcommand = None
+-
+-        if terminal_width is None and parent is not None:
+-            terminal_width = parent.terminal_width
+-        #: The width of the terminal (None is autodetection).
+-        self.terminal_width = terminal_width
+-
+-        if max_content_width is None and parent is not None:
+-            max_content_width = parent.max_content_width
+-        #: The maximum width of formatted content (None implies a sensible
+-        #: default which is 80 for most things).
+-        self.max_content_width = max_content_width
+-
+-        if allow_extra_args is None:
+-            allow_extra_args = command.allow_extra_args
+-        #: Indicates if the context allows extra args or if it should
+-        #: fail on parsing.
+-        #:
+-        #: .. versionadded:: 3.0
+-        self.allow_extra_args = allow_extra_args
+-
+-        if allow_interspersed_args is None:
+-            allow_interspersed_args = command.allow_interspersed_args
+-        #: Indicates if the context allows mixing of arguments and
+-        #: options or not.
+-        #:
+-        #: .. versionadded:: 3.0
+-        self.allow_interspersed_args = allow_interspersed_args
+-
+-        if ignore_unknown_options is None:
+-            ignore_unknown_options = command.ignore_unknown_options
+-        #: Instructs click to ignore options that a command does not
+-        #: understand and will store it on the context for later
+-        #: processing.  This is primarily useful for situations where you
+-        #: want to call into external programs.  Generally this pattern is
+-        #: strongly discouraged because it's not possibly to losslessly
+-        #: forward all arguments.
+-        #:
+-        #: .. versionadded:: 4.0
+-        self.ignore_unknown_options = ignore_unknown_options
+-
+-        if help_option_names is None:
+-            if parent is not None:
+-                help_option_names = parent.help_option_names
+-            else:
+-                help_option_names = ["--help"]
+-
+-        #: The names for the help options.
+-        self.help_option_names = help_option_names
+-
+-        if token_normalize_func is None and parent is not None:
+-            token_normalize_func = parent.token_normalize_func
+-
+-        #: An optional normalization function for tokens.  This is
+-        #: options, choices, commands etc.
+-        self.token_normalize_func = token_normalize_func
+-
+-        #: Indicates if resilient parsing is enabled.  In that case Click
+-        #: will do its best to not cause any failures and default values
+-        #: will be ignored. Useful for completion.
+-        self.resilient_parsing = resilient_parsing
+-
+-        # If there is no envvar prefix yet, but the parent has one and
+-        # the command on this level has a name, we can expand the envvar
+-        # prefix automatically.
+-        if auto_envvar_prefix is None:
+-            if (
+-                parent is not None
+-                and parent.auto_envvar_prefix is not None
+-                and self.info_name is not None
+-            ):
+-                auto_envvar_prefix = (
+-                    f"{parent.auto_envvar_prefix}_{self.info_name.upper()}"
+-                )
+-        else:
+-            auto_envvar_prefix = auto_envvar_prefix.upper()
+-        if auto_envvar_prefix is not None:
+-            auto_envvar_prefix = auto_envvar_prefix.replace("-", "_")
+-        self.auto_envvar_prefix = auto_envvar_prefix
+-
+-        if color is None and parent is not None:
+-            color = parent.color
+-
+-        #: Controls if styling output is wanted or not.
+-        self.color = color
+-
+-        self.show_default = show_default
+-
+-        self._close_callbacks = []
+-        self._depth = 0
+-        self._source_by_paramname = {}
+-
+-    def __enter__(self):
+-        self._depth += 1
+-        push_context(self)
+-        return self
+-
+-    def __exit__(self, exc_type, exc_value, tb):
+-        self._depth -= 1
+-        if self._depth == 0:
+-            self.close()
+-        pop_context()
+-
+-    @contextmanager
+-    def scope(self, cleanup=True):
+-        """This helper method can be used with the context object to promote
+-        it to the current thread local (see :func:`get_current_context`).
+-        The default behavior of this is to invoke the cleanup functions which
+-        can be disabled by setting `cleanup` to `False`.  The cleanup
+-        functions are typically used for things such as closing file handles.
+-
+-        If the cleanup is intended the context object can also be directly
+-        used as a context manager.
+-
+-        Example usage::
+-
+-            with ctx.scope():
+-                assert get_current_context() is ctx
+-
+-        This is equivalent::
+-
+-            with ctx:
+-                assert get_current_context() is ctx
+-
+-        .. versionadded:: 5.0
+-
+-        :param cleanup: controls if the cleanup functions should be run or
+-                        not.  The default is to run these functions.  In
+-                        some situations the context only wants to be
+-                        temporarily pushed in which case this can be disabled.
+-                        Nested pushes automatically defer the cleanup.
+-        """
+-        if not cleanup:
+-            self._depth += 1
+-        try:
+-            with self as rv:
+-                yield rv
+-        finally:
+-            if not cleanup:
+-                self._depth -= 1
+-
+-    @property
+-    def meta(self):
+-        """This is a dictionary which is shared with all the contexts
+-        that are nested.  It exists so that click utilities can store some
+-        state here if they need to.  It is however the responsibility of
+-        that code to manage this dictionary well.
+-
+-        The keys are supposed to be unique dotted strings.  For instance
+-        module paths are a good choice for it.  What is stored in there is
+-        irrelevant for the operation of click.  However what is important is
+-        that code that places data here adheres to the general semantics of
+-        the system.
+-
+-        Example usage::
+-
+-            LANG_KEY = f'{__name__}.lang'
+-
+-            def set_language(value):
+-                ctx = get_current_context()
+-                ctx.meta[LANG_KEY] = value
+-
+-            def get_language():
+-                return get_current_context().meta.get(LANG_KEY, 'en_US')
+-
+-        .. versionadded:: 5.0
+-        """
+-        return self._meta
+-
+-    def make_formatter(self):
+-        """Creates the formatter for the help and usage output."""
+-        return HelpFormatter(
+-            width=self.terminal_width, max_width=self.max_content_width
+-        )
+-
+-    def call_on_close(self, f):
+-        """This decorator remembers a function as callback that should be
+-        executed when the context tears down.  This is most useful to bind
+-        resource handling to the script execution.  For instance, file objects
+-        opened by the :class:`File` type will register their close callbacks
+-        here.
+-
+-        :param f: the function to execute on teardown.
+-        """
+-        self._close_callbacks.append(f)
+-        return f
+-
+-    def close(self):
+-        """Invokes all close callbacks."""
+-        for cb in self._close_callbacks:
+-            cb()
+-        self._close_callbacks = []
+-
+-    @property
+-    def command_path(self):
+-        """The computed command path.  This is used for the ``usage``
+-        information on the help page.  It's automatically created by
+-        combining the info names of the chain of contexts to the root.
+-        """
+-        rv = ""
+-        if self.info_name is not None:
+-            rv = self.info_name
+-        if self.parent is not None:
+-            rv = f"{self.parent.command_path} {rv}"
+-        return rv.lstrip()
+-
+-    def find_root(self):
+-        """Finds the outermost context."""
+-        node = self
+-        while node.parent is not None:
+-            node = node.parent
+-        return node
+-
+-    def find_object(self, object_type):
+-        """Finds the closest object of a given type."""
+-        node = self
+-        while node is not None:
+-            if isinstance(node.obj, object_type):
+-                return node.obj
+-            node = node.parent
+-
+-    def ensure_object(self, object_type):
+-        """Like :meth:`find_object` but sets the innermost object to a
+-        new instance of `object_type` if it does not exist.
+-        """
+-        rv = self.find_object(object_type)
+-        if rv is None:
+-            self.obj = rv = object_type()
+-        return rv
+-
+-    def lookup_default(self, name):
+-        """Looks up the default for a parameter name.  This by default
+-        looks into the :attr:`default_map` if available.
+-        """
+-        if self.default_map is not None:
+-            rv = self.default_map.get(name)
+-            if callable(rv):
+-                rv = rv()
+-            return rv
+-
+-    def fail(self, message):
+-        """Aborts the execution of the program with a specific error
+-        message.
+-
+-        :param message: the error message to fail with.
+-        """
+-        raise UsageError(message, self)
+-
+-    def abort(self):
+-        """Aborts the script."""
+-        raise Abort()
+-
+-    def exit(self, code=0):
+-        """Exits the application with a given exit code."""
+-        raise Exit(code)
+-
+-    def get_usage(self):
+-        """Helper method to get formatted usage string for the current
+-        context and command.
+-        """
+-        return self.command.get_usage(self)
+-
+-    def get_help(self):
+-        """Helper method to get formatted help page for the current
+-        context and command.
+-        """
+-        return self.command.get_help(self)
+-
+-    def invoke(*args, **kwargs):  # noqa: B902
+-        """Invokes a command callback in exactly the way it expects.  There
+-        are two ways to invoke this method:
+-
+-        1.  the first argument can be a callback and all other arguments and
+-            keyword arguments are forwarded directly to the function.
+-        2.  the first argument is a click command object.  In that case all
+-            arguments are forwarded as well but proper click parameters
+-            (options and click arguments) must be keyword arguments and Click
+-            will fill in defaults.
+-
+-        Note that before Click 3.2 keyword arguments were not properly filled
+-        in against the intention of this code and no context was created.  For
+-        more information about this change and why it was done in a bugfix
+-        release see :ref:`upgrade-to-3.2`.
+-        """
+-        self, callback = args[:2]
+-        ctx = self
+-
+-        # It's also possible to invoke another command which might or
+-        # might not have a callback.  In that case we also fill
+-        # in defaults and make a new context for this command.
+-        if isinstance(callback, Command):
+-            other_cmd = callback
+-            callback = other_cmd.callback
+-            ctx = Context(other_cmd, info_name=other_cmd.name, parent=self)
+-            if callback is None:
+-                raise TypeError(
+-                    "The given command does not have a callback that can be invoked."
+-                )
+-
+-            for param in other_cmd.params:
+-                if param.name not in kwargs and param.expose_value:
+-                    kwargs[param.name] = param.get_default(ctx)
+-
+-        args = args[2:]
+-        with augment_usage_errors(self):
+-            with ctx:
+-                return callback(*args, **kwargs)
+-
+-    def forward(*args, **kwargs):  # noqa: B902
+-        """Similar to :meth:`invoke` but fills in default keyword
+-        arguments from the current context if the other command expects
+-        it.  This cannot invoke callbacks directly, only other commands.
+-        """
+-        self, cmd = args[:2]
+-
+-        # It's also possible to invoke another command which might or
+-        # might not have a callback.
+-        if not isinstance(cmd, Command):
+-            raise TypeError("Callback is not a command.")
+-
+-        for param in self.params:
+-            if param not in kwargs:
+-                kwargs[param] = self.params[param]
+-
+-        return self.invoke(cmd, **kwargs)
+-
+-    def set_parameter_source(self, name, source):
+-        """Set the source of a parameter.
+-
+-        This indicates the location from which the value of the
+-        parameter was obtained.
+-
+-        :param name: the name of the command line parameter
+-        :param source: the source of the command line parameter, which
+-                       should be a valid ParameterSource value
+-        """
+-        ParameterSource.validate(source)
+-        self._source_by_paramname[name] = source
+-
+-    def get_parameter_source(self, name):
+-        """Get the source of a parameter.
+-
+-        This indicates the location from which the value of the
+-        parameter was obtained.  This can be useful for determining
+-        when a user specified an option on the command line that is
+-        the same as the default.  In that case, the source would be
+-        ParameterSource.COMMANDLINE, even though the value of the
+-        parameter was equivalent to the default.
+-
+-        :param name: the name of the command line parameter
+-        :returns: the source
+-        :rtype: ParameterSource
+-        """
+-        return self._source_by_paramname[name]
+-
+-
+-class BaseCommand:
+-    """The base command implements the minimal API contract of commands.
+-    Most code will never use this as it does not implement a lot of useful
+-    functionality but it can act as the direct subclass of alternative
+-    parsing methods that do not depend on the Click parser.
+-
+-    For instance, this can be used to bridge Click and other systems like
+-    argparse or docopt.
+-
+-    Because base commands do not implement a lot of the API that other
+-    parts of Click take for granted, they are not supported for all
+-    operations.  For instance, they cannot be used with the decorators
+-    usually and they have no built-in callback system.
+-
+-    .. versionchanged:: 2.0
+-       Added the `context_settings` parameter.
+-
+-    :param name: the name of the command to use unless a group overrides it.
+-    :param context_settings: an optional dictionary with defaults that are
+-                             passed to the context object.
+-    """
+-
+-    #: the default for the :attr:`Context.allow_extra_args` flag.
+-    allow_extra_args = False
+-    #: the default for the :attr:`Context.allow_interspersed_args` flag.
+-    allow_interspersed_args = True
+-    #: the default for the :attr:`Context.ignore_unknown_options` flag.
+-    ignore_unknown_options = False
+-
+-    def __init__(self, name, context_settings=None):
+-        #: the name the command thinks it has.  Upon registering a command
+-        #: on a :class:`Group` the group will default the command name
+-        #: with this information.  You should instead use the
+-        #: :class:`Context`\'s :attr:`~Context.info_name` attribute.
+-        self.name = name
+-        if context_settings is None:
+-            context_settings = {}
+-        #: an optional dictionary with defaults passed to the context.
+-        self.context_settings = context_settings
+-
+-    def __repr__(self):
+-        return f"<{self.__class__.__name__} {self.name}>"
+-
+-    def get_usage(self, ctx):
+-        raise NotImplementedError("Base commands cannot get usage")
+-
+-    def get_help(self, ctx):
+-        raise NotImplementedError("Base commands cannot get help")
+-
+-    def make_context(self, info_name, args, parent=None, **extra):
+-        """This function when given an info name and arguments will kick
+-        off the parsing and create a new :class:`Context`.  It does not
+-        invoke the actual command callback though.
+-
+-        :param info_name: the info name for this invokation.  Generally this
+-                          is the most descriptive name for the script or
+-                          command.  For the toplevel script it's usually
+-                          the name of the script, for commands below it it's
+-                          the name of the script.
+-        :param args: the arguments to parse as list of strings.
+-        :param parent: the parent context if available.
+-        :param extra: extra keyword arguments forwarded to the context
+-                      constructor.
+-        """
+-        for key, value in self.context_settings.items():
+-            if key not in extra:
+-                extra[key] = value
+-        ctx = Context(self, info_name=info_name, parent=parent, **extra)
+-        with ctx.scope(cleanup=False):
+-            self.parse_args(ctx, args)
+-        return ctx
+-
+-    def parse_args(self, ctx, args):
+-        """Given a context and a list of arguments this creates the parser
+-        and parses the arguments, then modifies the context as necessary.
+-        This is automatically invoked by :meth:`make_context`.
+-        """
+-        raise NotImplementedError("Base commands do not know how to parse arguments.")
+-
+-    def invoke(self, ctx):
+-        """Given a context, this invokes the command.  The default
+-        implementation is raising a not implemented error.
+-        """
+-        raise NotImplementedError("Base commands are not invokable by default")
+-
+-    def main(
+-        self,
+-        args=None,
+-        prog_name=None,
+-        complete_var=None,
+-        standalone_mode=True,
+-        **extra,
+-    ):
+-        """This is the way to invoke a script with all the bells and
+-        whistles as a command line application.  This will always terminate
+-        the application after a call.  If this is not wanted, ``SystemExit``
+-        needs to be caught.
+-
+-        This method is also available by directly calling the instance of
+-        a :class:`Command`.
+-
+-        .. versionadded:: 3.0
+-           Added the `standalone_mode` flag to control the standalone mode.
+-
+-        :param args: the arguments that should be used for parsing.  If not
+-                     provided, ``sys.argv[1:]`` is used.
+-        :param prog_name: the program name that should be used.  By default
+-                          the program name is constructed by taking the file
+-                          name from ``sys.argv[0]``.
+-        :param complete_var: the environment variable that controls the
+-                             bash completion support.  The default is
+-                             ``"_<prog_name>_COMPLETE"`` with prog_name in
+-                             uppercase.
+-        :param standalone_mode: the default behavior is to invoke the script
+-                                in standalone mode.  Click will then
+-                                handle exceptions and convert them into
+-                                error messages and the function will never
+-                                return but shut down the interpreter.  If
+-                                this is set to `False` they will be
+-                                propagated to the caller and the return
+-                                value of this function is the return value
+-                                of :meth:`invoke`.
+-        :param extra: extra keyword arguments are forwarded to the context
+-                      constructor.  See :class:`Context` for more information.
+-        """
+-        # Verify that the environment is configured correctly, or reject
+-        # further execution to avoid a broken script.
+-        _verify_python_env()
+-
+-        if args is None:
+-            args = sys.argv[1:]
+-        else:
+-            args = list(args)
+-
+-        if prog_name is None:
+-            prog_name = make_str(
+-                os.path.basename(sys.argv[0] if sys.argv else __file__)
+-            )
+-
+-        # Hook for the Bash completion.  This only activates if the Bash
+-        # completion is actually enabled, otherwise this is quite a fast
+-        # noop.
+-        _bashcomplete(self, prog_name, complete_var)
+-
+-        try:
+-            try:
+-                with self.make_context(prog_name, args, **extra) as ctx:
+-                    rv = self.invoke(ctx)
+-                    if not standalone_mode:
+-                        return rv
+-                    # it's not safe to `ctx.exit(rv)` here!
+-                    # note that `rv` may actually contain data like "1" which
+-                    # has obvious effects
+-                    # more subtle case: `rv=[None, None]` can come out of
+-                    # chained commands which all returned `None` -- so it's not
+-                    # even always obvious that `rv` indicates success/failure
+-                    # by its truthiness/falsiness
+-                    ctx.exit()
+-            except (EOFError, KeyboardInterrupt):
+-                echo(file=sys.stderr)
+-                raise Abort()
+-            except ClickException as e:
+-                if not standalone_mode:
+-                    raise
+-                e.show()
+-                sys.exit(e.exit_code)
+-            except OSError as e:
+-                if e.errno == errno.EPIPE:
+-                    sys.stdout = PacifyFlushWrapper(sys.stdout)
+-                    sys.stderr = PacifyFlushWrapper(sys.stderr)
+-                    sys.exit(1)
+-                else:
+-                    raise
+-        except Exit as e:
+-            if standalone_mode:
+-                sys.exit(e.exit_code)
+-            else:
+-                # in non-standalone mode, return the exit code
+-                # note that this is only reached if `self.invoke` above raises
+-                # an Exit explicitly -- thus bypassing the check there which
+-                # would return its result
+-                # the results of non-standalone execution may therefore be
+-                # somewhat ambiguous: if there are codepaths which lead to
+-                # `ctx.exit(1)` and to `return 1`, the caller won't be able to
+-                # tell the difference between the two
+-                return e.exit_code
+-        except Abort:
+-            if not standalone_mode:
+-                raise
+-            echo("Aborted!", file=sys.stderr)
+-            sys.exit(1)
+-
+-    def __call__(self, *args, **kwargs):
+-        """Alias for :meth:`main`."""
+-        return self.main(*args, **kwargs)
+-
+-
+-class Command(BaseCommand):
+-    """Commands are the basic building block of command line interfaces in
+-    Click.  A basic command handles command line parsing and might dispatch
+-    more parsing to commands nested below it.
+-
+-    .. versionchanged:: 2.0
+-       Added the `context_settings` parameter.
+-    .. versionchanged:: 8.0
+-       Added repr showing the command name
+-    .. versionchanged:: 7.1
+-       Added the `no_args_is_help` parameter.
+-
+-    :param name: the name of the command to use unless a group overrides it.
+-    :param context_settings: an optional dictionary with defaults that are
+-                             passed to the context object.
+-    :param callback: the callback to invoke.  This is optional.
+-    :param params: the parameters to register with this command.  This can
+-                   be either :class:`Option` or :class:`Argument` objects.
+-    :param help: the help string to use for this command.
+-    :param epilog: like the help string but it's printed at the end of the
+-                   help page after everything else.
+-    :param short_help: the short help to use for this command.  This is
+-                       shown on the command listing of the parent command.
+-    :param add_help_option: by default each command registers a ``--help``
+-                            option.  This can be disabled by this parameter.
+-    :param no_args_is_help: this controls what happens if no arguments are
+-                            provided.  This option is disabled by default.
+-                            If enabled this will add ``--help`` as argument
+-                            if no arguments are passed
+-    :param hidden: hide this command from help outputs.
+-
+-    :param deprecated: issues a message indicating that
+-                             the command is deprecated.
+-    """
+-
+-    def __init__(
+-        self,
+-        name,
+-        context_settings=None,
+-        callback=None,
+-        params=None,
+-        help=None,
+-        epilog=None,
+-        short_help=None,
+-        options_metavar="[OPTIONS]",
+-        add_help_option=True,
+-        no_args_is_help=False,
+-        hidden=False,
+-        deprecated=False,
+-    ):
+-        BaseCommand.__init__(self, name, context_settings)
+-        #: the callback to execute when the command fires.  This might be
+-        #: `None` in which case nothing happens.
+-        self.callback = callback
+-        #: the list of parameters for this command in the order they
+-        #: should show up in the help page and execute.  Eager parameters
+-        #: will automatically be handled before non eager ones.
+-        self.params = params or []
+-        # if a form feed (page break) is found in the help text, truncate help
+-        # text to the content preceding the first form feed
+-        if help and "\f" in help:
+-            help = help.split("\f", 1)[0]
+-        self.help = help
+-        self.epilog = epilog
+-        self.options_metavar = options_metavar
+-        self.short_help = short_help
+-        self.add_help_option = add_help_option
+-        self.no_args_is_help = no_args_is_help
+-        self.hidden = hidden
+-        self.deprecated = deprecated
+-
+-    def __repr__(self):
+-        return f"<{self.__class__.__name__} {self.name}>"
+-
+-    def get_usage(self, ctx):
+-        """Formats the usage line into a string and returns it.
+-
+-        Calls :meth:`format_usage` internally.
+-        """
+-        formatter = ctx.make_formatter()
+-        self.format_usage(ctx, formatter)
+-        return formatter.getvalue().rstrip("\n")
+-
+-    def get_params(self, ctx):
+-        rv = self.params
+-        help_option = self.get_help_option(ctx)
+-        if help_option is not None:
+-            rv = rv + [help_option]
+-        return rv
+-
+-    def format_usage(self, ctx, formatter):
+-        """Writes the usage line into the formatter.
+-
+-        This is a low-level method called by :meth:`get_usage`.
+-        """
+-        pieces = self.collect_usage_pieces(ctx)
+-        formatter.write_usage(ctx.command_path, " ".join(pieces))
+-
+-    def collect_usage_pieces(self, ctx):
+-        """Returns all the pieces that go into the usage line and returns
+-        it as a list of strings.
+-        """
+-        rv = [self.options_metavar]
+-        for param in self.get_params(ctx):
+-            rv.extend(param.get_usage_pieces(ctx))
+-        return rv
+-
+-    def get_help_option_names(self, ctx):
+-        """Returns the names for the help option."""
+-        all_names = set(ctx.help_option_names)
+-        for param in self.params:
+-            all_names.difference_update(param.opts)
+-            all_names.difference_update(param.secondary_opts)
+-        return all_names
+-
+-    def get_help_option(self, ctx):
+-        """Returns the help option object."""
+-        help_options = self.get_help_option_names(ctx)
+-        if not help_options or not self.add_help_option:
+-            return
+-
+-        def show_help(ctx, param, value):
+-            if value and not ctx.resilient_parsing:
+-                echo(ctx.get_help(), color=ctx.color)
+-                ctx.exit()
+-
+-        return Option(
+-            help_options,
+-            is_flag=True,
+-            is_eager=True,
+-            expose_value=False,
+-            callback=show_help,
+-            help="Show this message and exit.",
+-        )
+-
+-    def make_parser(self, ctx):
+-        """Creates the underlying option parser for this command."""
+-        parser = OptionParser(ctx)
+-        for param in self.get_params(ctx):
+-            param.add_to_parser(parser, ctx)
+-        return parser
+-
+-    def get_help(self, ctx):
+-        """Formats the help into a string and returns it.
+-
+-        Calls :meth:`format_help` internally.
+-        """
+-        formatter = ctx.make_formatter()
+-        self.format_help(ctx, formatter)
+-        return formatter.getvalue().rstrip("\n")
+-
+-    def get_short_help_str(self, limit=45):
+-        """Gets short help for the command or makes it by shortening the
+-        long help string.
+-        """
+-        return (
+-            self.short_help
+-            or self.help
+-            and make_default_short_help(self.help, limit)
+-            or ""
+-        )
+-
+-    def format_help(self, ctx, formatter):
+-        """Writes the help into the formatter if it exists.
+-
+-        This is a low-level method called by :meth:`get_help`.
+-
+-        This calls the following methods:
+-
+-        -   :meth:`format_usage`
+-        -   :meth:`format_help_text`
+-        -   :meth:`format_options`
+-        -   :meth:`format_epilog`
+-        """
+-        self.format_usage(ctx, formatter)
+-        self.format_help_text(ctx, formatter)
+-        self.format_options(ctx, formatter)
+-        self.format_epilog(ctx, formatter)
+-
+-    def format_help_text(self, ctx, formatter):
+-        """Writes the help text to the formatter if it exists."""
+-        if self.help:
+-            formatter.write_paragraph()
+-            with formatter.indentation():
+-                help_text = self.help
+-                if self.deprecated:
+-                    help_text += DEPRECATED_HELP_NOTICE
+-                formatter.write_text(help_text)
+-        elif self.deprecated:
+-            formatter.write_paragraph()
+-            with formatter.indentation():
+-                formatter.write_text(DEPRECATED_HELP_NOTICE)
+-
+-    def format_options(self, ctx, formatter):
+-        """Writes all the options into the formatter if they exist."""
+-        opts = []
+-        for param in self.get_params(ctx):
+-            rv = param.get_help_record(ctx)
+-            if rv is not None:
+-                opts.append(rv)
+-
+-        if opts:
+-            with formatter.section("Options"):
+-                formatter.write_dl(opts)
+-
+-    def format_epilog(self, ctx, formatter):
+-        """Writes the epilog into the formatter if it exists."""
+-        if self.epilog:
+-            formatter.write_paragraph()
+-            with formatter.indentation():
+-                formatter.write_text(self.epilog)
+-
+-    def parse_args(self, ctx, args):
+-        if not args and self.no_args_is_help and not ctx.resilient_parsing:
+-            echo(ctx.get_help(), color=ctx.color)
+-            ctx.exit()
+-
+-        parser = self.make_parser(ctx)
+-        opts, args, param_order = parser.parse_args(args=args)
+-
+-        for param in iter_params_for_processing(param_order, self.get_params(ctx)):
+-            value, args = param.handle_parse_result(ctx, opts, args)
+-
+-        if args and not ctx.allow_extra_args and not ctx.resilient_parsing:
+-            ctx.fail(
+-                "Got unexpected extra"
+-                f" argument{'s' if len(args) != 1 else ''}"
+-                f" ({' '.join(map(make_str, args))})"
+-            )
+-
+-        ctx.args = args
+-        return args
+-
+-    def invoke(self, ctx):
+-        """Given a context, this invokes the attached callback (if it exists)
+-        in the right way.
+-        """
+-        _maybe_show_deprecated_notice(self)
+-        if self.callback is not None:
+-            return ctx.invoke(self.callback, **ctx.params)
+-
+-
+-class MultiCommand(Command):
+-    """A multi command is the basic implementation of a command that
+-    dispatches to subcommands.  The most common version is the
+-    :class:`Group`.
+-
+-    :param invoke_without_command: this controls how the multi command itself
+-                                   is invoked.  By default it's only invoked
+-                                   if a subcommand is provided.
+-    :param no_args_is_help: this controls what happens if no arguments are
+-                            provided.  This option is enabled by default if
+-                            `invoke_without_command` is disabled or disabled
+-                            if it's enabled.  If enabled this will add
+-                            ``--help`` as argument if no arguments are
+-                            passed.
+-    :param subcommand_metavar: the string that is used in the documentation
+-                               to indicate the subcommand place.
+-    :param chain: if this is set to `True` chaining of multiple subcommands
+-                  is enabled.  This restricts the form of commands in that
+-                  they cannot have optional arguments but it allows
+-                  multiple commands to be chained together.
+-    :param result_callback: the result callback to attach to this multi
+-                            command.
+-    """
+-
+-    allow_extra_args = True
+-    allow_interspersed_args = False
+-
+-    def __init__(
+-        self,
+-        name=None,
+-        invoke_without_command=False,
+-        no_args_is_help=None,
+-        subcommand_metavar=None,
+-        chain=False,
+-        result_callback=None,
+-        **attrs,
+-    ):
+-        Command.__init__(self, name, **attrs)
+-        if no_args_is_help is None:
+-            no_args_is_help = not invoke_without_command
+-        self.no_args_is_help = no_args_is_help
+-        self.invoke_without_command = invoke_without_command
+-        if subcommand_metavar is None:
+-            if chain:
+-                subcommand_metavar = SUBCOMMANDS_METAVAR
+-            else:
+-                subcommand_metavar = SUBCOMMAND_METAVAR
+-        self.subcommand_metavar = subcommand_metavar
+-        self.chain = chain
+-        #: The result callback that is stored.  This can be set or
+-        #: overridden with the :func:`resultcallback` decorator.
+-        self.result_callback = result_callback
+-
+-        if self.chain:
+-            for param in self.params:
+-                if isinstance(param, Argument) and not param.required:
+-                    raise RuntimeError(
+-                        "Multi commands in chain mode cannot have"
+-                        " optional arguments."
+-                    )
+-
+-    def collect_usage_pieces(self, ctx):
+-        rv = Command.collect_usage_pieces(self, ctx)
+-        rv.append(self.subcommand_metavar)
+-        return rv
+-
+-    def format_options(self, ctx, formatter):
+-        Command.format_options(self, ctx, formatter)
+-        self.format_commands(ctx, formatter)
+-
+-    def resultcallback(self, replace=False):
+-        """Adds a result callback to the chain command.  By default if a
+-        result callback is already registered this will chain them but
+-        this can be disabled with the `replace` parameter.  The result
+-        callback is invoked with the return value of the subcommand
+-        (or the list of return values from all subcommands if chaining
+-        is enabled) as well as the parameters as they would be passed
+-        to the main callback.
+-
+-        Example::
+-
+-            @click.group()
+-            @click.option('-i', '--input', default=23)
+-            def cli(input):
+-                return 42
+-
+-            @cli.resultcallback()
+-            def process_result(result, input):
+-                return result + input
+-
+-        .. versionadded:: 3.0
+-
+-        :param replace: if set to `True` an already existing result
+-                        callback will be removed.
+-        """
+-
+-        def decorator(f):
+-            old_callback = self.result_callback
+-            if old_callback is None or replace:
+-                self.result_callback = f
+-                return f
+-
+-            def function(__value, *args, **kwargs):
+-                return f(old_callback(__value, *args, **kwargs), *args, **kwargs)
+-
+-            self.result_callback = rv = update_wrapper(function, f)
+-            return rv
+-
+-        return decorator
+-
+-    def format_commands(self, ctx, formatter):
+-        """Extra format method for multi commands that adds all the commands
+-        after the options.
+-        """
+-        commands = []
+-        for subcommand in self.list_commands(ctx):
+-            cmd = self.get_command(ctx, subcommand)
+-            # What is this, the tool lied about a command.  Ignore it
+-            if cmd is None:
+-                continue
+-            if cmd.hidden:
+-                continue
+-
+-            commands.append((subcommand, cmd))
+-
+-        # allow for 3 times the default spacing
+-        if len(commands):
+-            limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands)
+-
+-            rows = []
+-            for subcommand, cmd in commands:
+-                help = cmd.get_short_help_str(limit)
+-                rows.append((subcommand, help))
+-
+-            if rows:
+-                with formatter.section("Commands"):
+-                    formatter.write_dl(rows)
+-
+-    def parse_args(self, ctx, args):
+-        if not args and self.no_args_is_help and not ctx.resilient_parsing:
+-            echo(ctx.get_help(), color=ctx.color)
+-            ctx.exit()
+-
+-        rest = Command.parse_args(self, ctx, args)
+-        if self.chain:
+-            ctx.protected_args = rest
+-            ctx.args = []
+-        elif rest:
+-            ctx.protected_args, ctx.args = rest[:1], rest[1:]
+-
+-        return ctx.args
+-
+-    def invoke(self, ctx):
+-        def _process_result(value):
+-            if self.result_callback is not None:
+-                value = ctx.invoke(self.result_callback, value, **ctx.params)
+-            return value
+-
+-        if not ctx.protected_args:
+-            # If we are invoked without command the chain flag controls
+-            # how this happens.  If we are not in chain mode, the return
+-            # value here is the return value of the command.
+-            # If however we are in chain mode, the return value is the
+-            # return value of the result processor invoked with an empty
+-            # list (which means that no subcommand actually was executed).
+-            if self.invoke_without_command:
+-                if not self.chain:
+-                    return Command.invoke(self, ctx)
+-                with ctx:
+-                    Command.invoke(self, ctx)
+-                    return _process_result([])
+-            ctx.fail("Missing command.")
+-
+-        # Fetch args back out
+-        args = ctx.protected_args + ctx.args
+-        ctx.args = []
+-        ctx.protected_args = []
+-
+-        # If we're not in chain mode, we only allow the invocation of a
+-        # single command but we also inform the current context about the
+-        # name of the command to invoke.
+-        if not self.chain:
+-            # Make sure the context is entered so we do not clean up
+-            # resources until the result processor has worked.
+-            with ctx:
+-                cmd_name, cmd, args = self.resolve_command(ctx, args)
+-                ctx.invoked_subcommand = cmd_name
+-                Command.invoke(self, ctx)
+-                sub_ctx = cmd.make_context(cmd_name, args, parent=ctx)
+-                with sub_ctx:
+-                    return _process_result(sub_ctx.command.invoke(sub_ctx))
+-
+-        # In chain mode we create the contexts step by step, but after the
+-        # base command has been invoked.  Because at that point we do not
+-        # know the subcommands yet, the invoked subcommand attribute is
+-        # set to ``*`` to inform the command that subcommands are executed
+-        # but nothing else.
+-        with ctx:
+-            ctx.invoked_subcommand = "*" if args else None
+-            Command.invoke(self, ctx)
+-
+-            # Otherwise we make every single context and invoke them in a
+-            # chain.  In that case the return value to the result processor
+-            # is the list of all invoked subcommand's results.
+-            contexts = []
+-            while args:
+-                cmd_name, cmd, args = self.resolve_command(ctx, args)
+-                sub_ctx = cmd.make_context(
+-                    cmd_name,
+-                    args,
+-                    parent=ctx,
+-                    allow_extra_args=True,
+-                    allow_interspersed_args=False,
+-                )
+-                contexts.append(sub_ctx)
+-                args, sub_ctx.args = sub_ctx.args, []
+-
+-            rv = []
+-            for sub_ctx in contexts:
+-                with sub_ctx:
+-                    rv.append(sub_ctx.command.invoke(sub_ctx))
+-            return _process_result(rv)
+-
+-    def resolve_command(self, ctx, args):
+-        cmd_name = make_str(args[0])
+-        original_cmd_name = cmd_name
+-
+-        # Get the command
+-        cmd = self.get_command(ctx, cmd_name)
+-
+-        # If we can't find the command but there is a normalization
+-        # function available, we try with that one.
+-        if cmd is None and ctx.token_normalize_func is not None:
+-            cmd_name = ctx.token_normalize_func(cmd_name)
+-            cmd = self.get_command(ctx, cmd_name)
+-
+-        # If we don't find the command we want to show an error message
+-        # to the user that it was not provided.  However, there is
+-        # something else we should do: if the first argument looks like
+-        # an option we want to kick off parsing again for arguments to
+-        # resolve things like --help which now should go to the main
+-        # place.
+-        if cmd is None and not ctx.resilient_parsing:
+-            if split_opt(cmd_name)[0]:
+-                self.parse_args(ctx, ctx.args)
+-            ctx.fail(f"No such command '{original_cmd_name}'.")
+-
+-        return cmd_name, cmd, args[1:]
+-
+-    def get_command(self, ctx, cmd_name):
+-        """Given a context and a command name, this returns a
+-        :class:`Command` object if it exists or returns `None`.
+-        """
+-        raise NotImplementedError()
+-
+-    def list_commands(self, ctx):
+-        """Returns a list of subcommand names in the order they should
+-        appear.
+-        """
+-        return []
+-
+-
+-class Group(MultiCommand):
+-    """A group allows a command to have subcommands attached.  This is the
+-    most common way to implement nesting in Click.
+-
+-    :param commands: a dictionary of commands.
+-    """
+-
+-    def __init__(self, name=None, commands=None, **attrs):
+-        MultiCommand.__init__(self, name, **attrs)
+-        #: the registered subcommands by their exported names.
+-        self.commands = commands or {}
+-
+-    def add_command(self, cmd, name=None):
+-        """Registers another :class:`Command` with this group.  If the name
+-        is not provided, the name of the command is used.
+-        """
+-        name = name or cmd.name
+-        if name is None:
+-            raise TypeError("Command has no name.")
+-        _check_multicommand(self, name, cmd, register=True)
+-        self.commands[name] = cmd
+-
+-    def command(self, *args, **kwargs):
+-        """A shortcut decorator for declaring and attaching a command to
+-        the group.  This takes the same arguments as :func:`command` but
+-        immediately registers the created command with this instance by
+-        calling into :meth:`add_command`.
+-        """
+-        from .decorators import command
+-
+-        def decorator(f):
+-            cmd = command(*args, **kwargs)(f)
+-            self.add_command(cmd)
+-            return cmd
+-
+-        return decorator
+-
+-    def group(self, *args, **kwargs):
+-        """A shortcut decorator for declaring and attaching a group to
+-        the group.  This takes the same arguments as :func:`group` but
+-        immediately registers the created command with this instance by
+-        calling into :meth:`add_command`.
+-        """
+-        from .decorators import group
+-
+-        def decorator(f):
+-            cmd = group(*args, **kwargs)(f)
+-            self.add_command(cmd)
+-            return cmd
+-
+-        return decorator
+-
+-    def get_command(self, ctx, cmd_name):
+-        return self.commands.get(cmd_name)
+-
+-    def list_commands(self, ctx):
+-        return sorted(self.commands)
+-
+-
+-class CommandCollection(MultiCommand):
+-    """A command collection is a multi command that merges multiple multi
+-    commands together into one.  This is a straightforward implementation
+-    that accepts a list of different multi commands as sources and
+-    provides all the commands for each of them.
+-    """
+-
+-    def __init__(self, name=None, sources=None, **attrs):
+-        MultiCommand.__init__(self, name, **attrs)
+-        #: The list of registered multi commands.
+-        self.sources = sources or []
+-
+-    def add_source(self, multi_cmd):
+-        """Adds a new multi command to the chain dispatcher."""
+-        self.sources.append(multi_cmd)
+-
+-    def get_command(self, ctx, cmd_name):
+-        for source in self.sources:
+-            rv = source.get_command(ctx, cmd_name)
+-            if rv is not None:
+-                if self.chain:
+-                    _check_multicommand(self, cmd_name, rv)
+-                return rv
+-
+-    def list_commands(self, ctx):
+-        rv = set()
+-        for source in self.sources:
+-            rv.update(source.list_commands(ctx))
+-        return sorted(rv)
+-
+-
+-class Parameter:
+-    r"""A parameter to a command comes in two versions: they are either
+-    :class:`Option`\s or :class:`Argument`\s.  Other subclasses are currently
+-    not supported by design as some of the internals for parsing are
+-    intentionally not finalized.
+-
+-    Some settings are supported by both options and arguments.
+-
+-    :param param_decls: the parameter declarations for this option or
+-                        argument.  This is a list of flags or argument
+-                        names.
+-    :param type: the type that should be used.  Either a :class:`ParamType`
+-                 or a Python type.  The latter is converted into the former
+-                 automatically if supported.
+-    :param required: controls if this is optional or not.
+-    :param default: the default value if omitted.  This can also be a callable,
+-                    in which case it's invoked when the default is needed
+-                    without any arguments.
+-    :param callback: a callback that should be executed after the parameter
+-                     was matched.  This is called as ``fn(ctx, param,
+-                     value)`` and needs to return the value.
+-    :param nargs: the number of arguments to match.  If not ``1`` the return
+-                  value is a tuple instead of single value.  The default for
+-                  nargs is ``1`` (except if the type is a tuple, then it's
+-                  the arity of the tuple). If ``nargs=-1``, all remaining
+-                  parameters are collected.
+-    :param metavar: how the value is represented in the help page.
+-    :param expose_value: if this is `True` then the value is passed onwards
+-                         to the command callback and stored on the context,
+-                         otherwise it's skipped.
+-    :param is_eager: eager values are processed before non-eager ones.  This
+-                     should not be set for arguments or it will invert the
+-                     order of processing.
+-    :param envvar: a string or list of strings that are environment variables
+-                   that should be checked.
+-
+-    .. versionchanged:: 7.1
+-        Empty environment variables are ignored rather than taking the
+-        empty string value. This makes it possible for scripts to clear
+-        variables if they can't unset them.
+-
+-    .. versionchanged:: 2.0
+-        Changed signature for parameter callback to also be passed the
+-        parameter. The old callback format will still work, but it will
+-        raise a warning to give you a chance to migrate the code easier.
+-    """
+-    param_type_name = "parameter"
+-
+-    def __init__(
+-        self,
+-        param_decls=None,
+-        type=None,
+-        required=False,
+-        default=None,
+-        callback=None,
+-        nargs=None,
+-        metavar=None,
+-        expose_value=True,
+-        is_eager=False,
+-        envvar=None,
+-        autocompletion=None,
+-    ):
+-        self.name, self.opts, self.secondary_opts = self._parse_decls(
+-            param_decls or (), expose_value
+-        )
+-
+-        self.type = convert_type(type, default)
+-
+-        # Default nargs to what the type tells us if we have that
+-        # information available.
+-        if nargs is None:
+-            if self.type.is_composite:
+-                nargs = self.type.arity
+-            else:
+-                nargs = 1
+-
+-        self.required = required
+-        self.callback = callback
+-        self.nargs = nargs
+-        self.multiple = False
+-        self.expose_value = expose_value
+-        self.default = default
+-        self.is_eager = is_eager
+-        self.metavar = metavar
+-        self.envvar = envvar
+-        self.autocompletion = autocompletion
+-
+-    def __repr__(self):
+-        return f"<{self.__class__.__name__} {self.name}>"
+-
+-    @property
+-    def human_readable_name(self):
+-        """Returns the human readable name of this parameter.  This is the
+-        same as the name for options, but the metavar for arguments.
+-        """
+-        return self.name
+-
+-    def make_metavar(self):
+-        if self.metavar is not None:
+-            return self.metavar
+-        metavar = self.type.get_metavar(self)
+-        if metavar is None:
+-            metavar = self.type.name.upper()
+-        if self.nargs != 1:
+-            metavar += "..."
+-        return metavar
+-
+-    def get_default(self, ctx):
+-        """Given a context variable this calculates the default value."""
+-        # Otherwise go with the regular default.
+-        if callable(self.default):
+-            rv = self.default()
+-        else:
+-            rv = self.default
+-        return self.type_cast_value(ctx, rv)
+-
+-    def add_to_parser(self, parser, ctx):
+-        pass
+-
+-    def consume_value(self, ctx, opts):
+-        value = opts.get(self.name)
+-        source = ParameterSource.COMMANDLINE
+-        if value is None:
+-            value = self.value_from_envvar(ctx)
+-            source = ParameterSource.ENVIRONMENT
+-        if value is None:
+-            value = ctx.lookup_default(self.name)
+-            source = ParameterSource.DEFAULT_MAP
+-        if value is not None:
+-            ctx.set_parameter_source(self.name, source)
+-        return value
+-
+-    def type_cast_value(self, ctx, value):
+-        """Given a value this runs it properly through the type system.
+-        This automatically handles things like `nargs` and `multiple` as
+-        well as composite types.
+-        """
+-        if self.type.is_composite:
+-            if self.nargs <= 1:
+-                raise TypeError(
+-                    "Attempted to invoke composite type but nargs has"
+-                    f" been set to {self.nargs}. This is not supported;"
+-                    " nargs needs to be set to a fixed value > 1."
+-                )
+-            if self.multiple:
+-                return tuple(self.type(x or (), self, ctx) for x in value or ())
+-            return self.type(value or (), self, ctx)
+-
+-        def _convert(value, level):
+-            if level == 0:
+-                return self.type(value, self, ctx)
+-            return tuple(_convert(x, level - 1) for x in value or ())
+-
+-        return _convert(value, (self.nargs != 1) + bool(self.multiple))
+-
+-    def process_value(self, ctx, value):
+-        """Given a value and context this runs the logic to convert the
+-        value as necessary.
+-        """
+-        # If the value we were given is None we do nothing.  This way
+-        # code that calls this can easily figure out if something was
+-        # not provided.  Otherwise it would be converted into an empty
+-        # tuple for multiple invocations which is inconvenient.
+-        if value is not None:
+-            return self.type_cast_value(ctx, value)
+-
+-    def value_is_missing(self, value):
+-        if value is None:
+-            return True
+-        if (self.nargs != 1 or self.multiple) and value == ():
+-            return True
+-        return False
+-
+-    def full_process_value(self, ctx, value):
+-        value = self.process_value(ctx, value)
+-
+-        if value is None and not ctx.resilient_parsing:
+-            value = self.get_default(ctx)
+-            if value is not None:
+-                ctx.set_parameter_source(self.name, ParameterSource.DEFAULT)
+-
+-        if self.required and self.value_is_missing(value):
+-            raise MissingParameter(ctx=ctx, param=self)
+-
+-        return value
+-
+-    def resolve_envvar_value(self, ctx):
+-        if self.envvar is None:
+-            return
+-        if isinstance(self.envvar, (tuple, list)):
+-            for envvar in self.envvar:
+-                rv = os.environ.get(envvar)
+-                if rv is not None:
+-                    return rv
+-        else:
+-            rv = os.environ.get(self.envvar)
+-
+-            if rv != "":
+-                return rv
+-
+-    def value_from_envvar(self, ctx):
+-        rv = self.resolve_envvar_value(ctx)
+-        if rv is not None and self.nargs != 1:
+-            rv = self.type.split_envvar_value(rv)
+-        return rv
+-
+-    def handle_parse_result(self, ctx, opts, args):
+-        with augment_usage_errors(ctx, param=self):
+-            value = self.consume_value(ctx, opts)
+-            try:
+-                value = self.full_process_value(ctx, value)
+-            except Exception:
+-                if not ctx.resilient_parsing:
+-                    raise
+-                value = None
+-            if self.callback is not None:
+-                try:
+-                    value = self.callback(ctx, self, value)
+-                except Exception:
+-                    if not ctx.resilient_parsing:
+-                        raise
+-
+-        if self.expose_value:
+-            ctx.params[self.name] = value
+-        return value, args
+-
+-    def get_help_record(self, ctx):
+-        pass
+-
+-    def get_usage_pieces(self, ctx):
+-        return []
+-
+-    def get_error_hint(self, ctx):
+-        """Get a stringified version of the param for use in error messages to
+-        indicate which param caused the error.
+-        """
+-        hint_list = self.opts or [self.human_readable_name]
+-        return " / ".join(repr(x) for x in hint_list)
+-
+-
+-class Option(Parameter):
+-    """Options are usually optional values on the command line and
+-    have some extra features that arguments don't have.
+-
+-    All other parameters are passed onwards to the parameter constructor.
+-
+-    :param show_default: controls if the default value should be shown on the
+-                         help page. Normally, defaults are not shown. If this
+-                         value is a string, it shows the string instead of the
+-                         value. This is particularly useful for dynamic options.
+-    :param show_envvar: controls if an environment variable should be shown on
+-                        the help page.  Normally, environment variables
+-                        are not shown.
+-    :param prompt: if set to `True` or a non empty string then the user will be
+-                   prompted for input.  If set to `True` the prompt will be the
+-                   option name capitalized.
+-    :param confirmation_prompt: if set then the value will need to be confirmed
+-                                if it was prompted for.
+-    :param hide_input: if this is `True` then the input on the prompt will be
+-                       hidden from the user.  This is useful for password
+-                       input.
+-    :param is_flag: forces this option to act as a flag.  The default is
+-                    auto detection.
+-    :param flag_value: which value should be used for this flag if it's
+-                       enabled.  This is set to a boolean automatically if
+-                       the option string contains a slash to mark two options.
+-    :param multiple: if this is set to `True` then the argument is accepted
+-                     multiple times and recorded.  This is similar to ``nargs``
+-                     in how it works but supports arbitrary number of
+-                     arguments.
+-    :param count: this flag makes an option increment an integer.
+-    :param allow_from_autoenv: if this is enabled then the value of this
+-                               parameter will be pulled from an environment
+-                               variable in case a prefix is defined on the
+-                               context.
+-    :param help: the help string.
+-    :param hidden: hide this option from help outputs.
+-    """
+-
+-    param_type_name = "option"
+-
+-    def __init__(
+-        self,
+-        param_decls=None,
+-        show_default=False,
+-        prompt=False,
+-        confirmation_prompt=False,
+-        hide_input=False,
+-        is_flag=None,
+-        flag_value=None,
+-        multiple=False,
+-        count=False,
+-        allow_from_autoenv=True,
+-        type=None,
+-        help=None,
+-        hidden=False,
+-        show_choices=True,
+-        show_envvar=False,
+-        **attrs,
+-    ):
+-        default_is_missing = attrs.get("default", _missing) is _missing
+-        Parameter.__init__(self, param_decls, type=type, **attrs)
+-
+-        if prompt is True:
+-            prompt_text = self.name.replace("_", " ").capitalize()
+-        elif prompt is False:
+-            prompt_text = None
+-        else:
+-            prompt_text = prompt
+-        self.prompt = prompt_text
+-        self.confirmation_prompt = confirmation_prompt
+-        self.hide_input = hide_input
+-        self.hidden = hidden
+-
+-        # Flags
+-        if is_flag is None:
+-            if flag_value is not None:
+-                is_flag = True
+-            else:
+-                is_flag = bool(self.secondary_opts)
+-        if is_flag and default_is_missing:
+-            self.default = False
+-        if flag_value is None:
+-            flag_value = not self.default
+-        self.is_flag = is_flag
+-        self.flag_value = flag_value
+-        if self.is_flag and isinstance(self.flag_value, bool) and type in [None, bool]:
+-            self.type = BOOL
+-            self.is_bool_flag = True
+-        else:
+-            self.is_bool_flag = False
+-
+-        # Counting
+-        self.count = count
+-        if count:
+-            if type is None:
+-                self.type = IntRange(min=0)
+-            if default_is_missing:
+-                self.default = 0
+-
+-        self.multiple = multiple
+-        self.allow_from_autoenv = allow_from_autoenv
+-        self.help = help
+-        self.show_default = show_default
+-        self.show_choices = show_choices
+-        self.show_envvar = show_envvar
+-
+-        # Sanity check for stuff we don't support
+-        if __debug__:
+-            if self.nargs < 0:
+-                raise TypeError("Options cannot have nargs < 0")
+-            if self.prompt and self.is_flag and not self.is_bool_flag:
+-                raise TypeError("Cannot prompt for flags that are not bools.")
+-            if not self.is_bool_flag and self.secondary_opts:
+-                raise TypeError("Got secondary option for non boolean flag.")
+-            if self.is_bool_flag and self.hide_input and self.prompt is not None:
+-                raise TypeError("Hidden input does not work with boolean flag prompts.")
+-            if self.count:
+-                if self.multiple:
+-                    raise TypeError(
+-                        "Options cannot be multiple and count at the same time."
+-                    )
+-                elif self.is_flag:
+-                    raise TypeError(
+-                        "Options cannot be count and flags at the same time."
+-                    )
+-
+-    def _parse_decls(self, decls, expose_value):
+-        opts = []
+-        secondary_opts = []
+-        name = None
+-        possible_names = []
+-
+-        for decl in decls:
+-            if decl.isidentifier():
+-                if name is not None:
+-                    raise TypeError("Name defined twice")
+-                name = decl
+-            else:
+-                split_char = ";" if decl[:1] == "/" else "/"
+-                if split_char in decl:
+-                    first, second = decl.split(split_char, 1)
+-                    first = first.rstrip()
+-                    if first:
+-                        possible_names.append(split_opt(first))
+-                        opts.append(first)
+-                    second = second.lstrip()
+-                    if second:
+-                        secondary_opts.append(second.lstrip())
+-                else:
+-                    possible_names.append(split_opt(decl))
+-                    opts.append(decl)
+-
+-        if name is None and possible_names:
+-            possible_names.sort(key=lambda x: -len(x[0]))  # group long options first
+-            name = possible_names[0][1].replace("-", "_").lower()
+-            if not name.isidentifier():
+-                name = None
+-
+-        if name is None:
+-            if not expose_value:
+-                return None, opts, secondary_opts
+-            raise TypeError("Could not determine name for option")
+-
+-        if not opts and not secondary_opts:
+-            raise TypeError(
+-                f"No options defined but a name was passed ({name})."
+-                " Did you mean to declare an argument instead of an"
+-                " option?"
+-            )
+-
+-        return name, opts, secondary_opts
+-
+-    def add_to_parser(self, parser, ctx):
+-        kwargs = {
+-            "dest": self.name,
+-            "nargs": self.nargs,
+-            "obj": self,
+-        }
+-
+-        if self.multiple:
+-            action = "append"
+-        elif self.count:
+-            action = "count"
+-        else:
+-            action = "store"
+-
+-        if self.is_flag:
+-            kwargs.pop("nargs", None)
+-            action_const = f"{action}_const"
+-            if self.is_bool_flag and self.secondary_opts:
+-                parser.add_option(self.opts, action=action_const, const=True, **kwargs)
+-                parser.add_option(
+-                    self.secondary_opts, action=action_const, const=False, **kwargs
+-                )
+-            else:
+-                parser.add_option(
+-                    self.opts, action=action_const, const=self.flag_value, **kwargs
+-                )
+-        else:
+-            kwargs["action"] = action
+-            parser.add_option(self.opts, **kwargs)
+-
+-    def get_help_record(self, ctx):
+-        if self.hidden:
+-            return
+-        any_prefix_is_slash = []
+-
+-        def _write_opts(opts):
+-            rv, any_slashes = join_options(opts)
+-            if any_slashes:
+-                any_prefix_is_slash[:] = [True]
+-            if not self.is_flag and not self.count:
+-                rv += f" {self.make_metavar()}"
+-            return rv
+-
+-        rv = [_write_opts(self.opts)]
+-        if self.secondary_opts:
+-            rv.append(_write_opts(self.secondary_opts))
+-
+-        help = self.help or ""
+-        extra = []
+-        if self.show_envvar:
+-            envvar = self.envvar
+-            if envvar is None:
+-                if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None:
+-                    envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
+-            if envvar is not None:
+-                var_str = (
+-                    ", ".join(str(d) for d in envvar)
+-                    if isinstance(envvar, (list, tuple))
+-                    else envvar
+-                )
+-                extra.append(f"env var: {var_str}")
+-        if self.default is not None and (self.show_default or ctx.show_default):
+-            if isinstance(self.show_default, str):
+-                default_string = f"({self.show_default})"
+-            elif isinstance(self.default, (list, tuple)):
+-                default_string = ", ".join(str(d) for d in self.default)
+-            elif inspect.isfunction(self.default):
+-                default_string = "(dynamic)"
+-            else:
+-                default_string = self.default
+-            extra.append(f"default: {default_string}")
+-
+-        if self.required:
+-            extra.append("required")
+-        if extra:
+-            extra_str = ";".join(extra)
+-            help = f"{help}  [{extra_str}]" if help else f"[{extra_str}]"
+-
+-        return ("; " if any_prefix_is_slash else " / ").join(rv), help
+-
+-    def get_default(self, ctx):
+-        # If we're a non boolean flag our default is more complex because
+-        # we need to look at all flags in the same group to figure out
+-        # if we're the default one in which case we return the flag
+-        # value as default.
+-        if self.is_flag and not self.is_bool_flag:
+-            for param in ctx.command.params:
+-                if param.name == self.name and param.default:
+-                    return param.flag_value
+-            return None
+-        return Parameter.get_default(self, ctx)
+-
+-    def prompt_for_value(self, ctx):
+-        """This is an alternative flow that can be activated in the full
+-        value processing if a value does not exist.  It will prompt the
+-        user until a valid value exists and then returns the processed
+-        value as result.
+-        """
+-        # Calculate the default before prompting anything to be stable.
+-        default = self.get_default(ctx)
+-
+-        # If this is a prompt for a flag we need to handle this
+-        # differently.
+-        if self.is_bool_flag:
+-            return confirm(self.prompt, default)
+-
+-        return prompt(
+-            self.prompt,
+-            default=default,
+-            type=self.type,
+-            hide_input=self.hide_input,
+-            show_choices=self.show_choices,
+-            confirmation_prompt=self.confirmation_prompt,
+-            value_proc=lambda x: self.process_value(ctx, x),
+-        )
+-
+-    def resolve_envvar_value(self, ctx):
+-        rv = Parameter.resolve_envvar_value(self, ctx)
+-        if rv is not None:
+-            return rv
+-        if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None:
+-            envvar = f"{ctx.auto_envvar_prefix}_{self.name.upper()}"
+-            return os.environ.get(envvar)
+-
+-    def value_from_envvar(self, ctx):
+-        rv = self.resolve_envvar_value(ctx)
+-        if rv is None:
+-            return None
+-        value_depth = (self.nargs != 1) + bool(self.multiple)
+-        if value_depth > 0 and rv is not None:
+-            rv = self.type.split_envvar_value(rv)
+-            if self.multiple and self.nargs != 1:
+-                rv = batch(rv, self.nargs)
+-        return rv
+-
+-    def full_process_value(self, ctx, value):
+-        if value is None and self.prompt is not None and not ctx.resilient_parsing:
+-            return self.prompt_for_value(ctx)
+-        return Parameter.full_process_value(self, ctx, value)
+-
+-
+-class Argument(Parameter):
+-    """Arguments are positional parameters to a command.  They generally
+-    provide fewer features than options but can have infinite ``nargs``
+-    and are required by default.
+-
+-    All parameters are passed onwards to the parameter constructor.
+-    """
+-
+-    param_type_name = "argument"
+-
+-    def __init__(self, param_decls, required=None, **attrs):
+-        if required is None:
+-            if attrs.get("default") is not None:
+-                required = False
+-            else:
+-                required = attrs.get("nargs", 1) > 0
+-        Parameter.__init__(self, param_decls, required=required, **attrs)
+-        if self.default is not None and self.nargs < 0:
+-            raise TypeError(
+-                "nargs=-1 in combination with a default value is not supported."
+-            )
+-
+-    @property
+-    def human_readable_name(self):
+-        if self.metavar is not None:
+-            return self.metavar
+-        return self.name.upper()
+-
+-    def make_metavar(self):
+-        if self.metavar is not None:
+-            return self.metavar
+-        var = self.type.get_metavar(self)
+-        if not var:
+-            var = self.name.upper()
+-        if not self.required:
+-            var = f"[{var}]"
+-        if self.nargs != 1:
+-            var += "..."
+-        return var
+-
+-    def _parse_decls(self, decls, expose_value):
+-        if not decls:
+-            if not expose_value:
+-                return None, [], []
+-            raise TypeError("Could not determine name for argument")
+-        if len(decls) == 1:
+-            name = arg = decls[0]
+-            name = name.replace("-", "_").lower()
+-        else:
+-            raise TypeError(
+-                "Arguments take exactly one parameter declaration, got"
+-                f" {len(decls)}."
+-            )
+-        return name, [arg], []
+-
+-    def get_usage_pieces(self, ctx):
+-        return [self.make_metavar()]
+-
+-    def get_error_hint(self, ctx):
+-        return repr(self.make_metavar())
+-
+-    def add_to_parser(self, parser, ctx):
+-        parser.add_argument(dest=self.name, nargs=self.nargs, obj=self)
+diff --git a/dynaconf/vendor_src/click/decorators.py b/dynaconf/vendor_src/click/decorators.py
+deleted file mode 100644
+index 3013305..0000000
+--- a/dynaconf/vendor_src/click/decorators.py
++++ /dev/null
+@@ -1,331 +0,0 @@
+-import inspect
+-import sys
+-from functools import update_wrapper
+-
+-from .core import Argument
+-from .core import Command
+-from .core import Group
+-from .core import Option
+-from .globals import get_current_context
+-from .utils import echo
+-
+-
+-def pass_context(f):
+-    """Marks a callback as wanting to receive the current context
+-    object as first argument.
+-    """
+-
+-    def new_func(*args, **kwargs):
+-        return f(get_current_context(), *args, **kwargs)
+-
+-    return update_wrapper(new_func, f)
+-
+-
+-def pass_obj(f):
+-    """Similar to :func:`pass_context`, but only pass the object on the
+-    context onwards (:attr:`Context.obj`).  This is useful if that object
+-    represents the state of a nested system.
+-    """
+-
+-    def new_func(*args, **kwargs):
+-        return f(get_current_context().obj, *args, **kwargs)
+-
+-    return update_wrapper(new_func, f)
+-
+-
+-def make_pass_decorator(object_type, ensure=False):
+-    """Given an object type this creates a decorator that will work
+-    similar to :func:`pass_obj` but instead of passing the object of the
+-    current context, it will find the innermost context of type
+-    :func:`object_type`.
+-
+-    This generates a decorator that works roughly like this::
+-
+-        from functools import update_wrapper
+-
+-        def decorator(f):
+-            @pass_context
+-            def new_func(ctx, *args, **kwargs):
+-                obj = ctx.find_object(object_type)
+-                return ctx.invoke(f, obj, *args, **kwargs)
+-            return update_wrapper(new_func, f)
+-        return decorator
+-
+-    :param object_type: the type of the object to pass.
+-    :param ensure: if set to `True`, a new object will be created and
+-                   remembered on the context if it's not there yet.
+-    """
+-
+-    def decorator(f):
+-        def new_func(*args, **kwargs):
+-            ctx = get_current_context()
+-            if ensure:
+-                obj = ctx.ensure_object(object_type)
+-            else:
+-                obj = ctx.find_object(object_type)
+-            if obj is None:
+-                raise RuntimeError(
+-                    "Managed to invoke callback without a context"
+-                    f" object of type {object_type.__name__!r}"
+-                    " existing."
+-                )
+-            return ctx.invoke(f, obj, *args, **kwargs)
+-
+-        return update_wrapper(new_func, f)
+-
+-    return decorator
+-
+-
+-def _make_command(f, name, attrs, cls):
+-    if isinstance(f, Command):
+-        raise TypeError("Attempted to convert a callback into a command twice.")
+-    try:
+-        params = f.__click_params__
+-        params.reverse()
+-        del f.__click_params__
+-    except AttributeError:
+-        params = []
+-    help = attrs.get("help")
+-    if help is None:
+-        help = inspect.getdoc(f)
+-        if isinstance(help, bytes):
+-            help = help.decode("utf-8")
+-    else:
+-        help = inspect.cleandoc(help)
+-    attrs["help"] = help
+-    return cls(
+-        name=name or f.__name__.lower().replace("_", "-"),
+-        callback=f,
+-        params=params,
+-        **attrs,
+-    )
+-
+-
+-def command(name=None, cls=None, **attrs):
+-    r"""Creates a new :class:`Command` and uses the decorated function as
+-    callback.  This will also automatically attach all decorated
+-    :func:`option`\s and :func:`argument`\s as parameters to the command.
+-
+-    The name of the command defaults to the name of the function with
+-    underscores replaced by dashes.  If you want to change that, you can
+-    pass the intended name as the first argument.
+-
+-    All keyword arguments are forwarded to the underlying command class.
+-
+-    Once decorated the function turns into a :class:`Command` instance
+-    that can be invoked as a command line utility or be attached to a
+-    command :class:`Group`.
+-
+-    :param name: the name of the command.  This defaults to the function
+-                 name with underscores replaced by dashes.
+-    :param cls: the command class to instantiate.  This defaults to
+-                :class:`Command`.
+-    """
+-    if cls is None:
+-        cls = Command
+-
+-    def decorator(f):
+-        cmd = _make_command(f, name, attrs, cls)
+-        cmd.__doc__ = f.__doc__
+-        return cmd
+-
+-    return decorator
+-
+-
+-def group(name=None, **attrs):
+-    """Creates a new :class:`Group` with a function as callback.  This
+-    works otherwise the same as :func:`command` just that the `cls`
+-    parameter is set to :class:`Group`.
+-    """
+-    attrs.setdefault("cls", Group)
+-    return command(name, **attrs)
+-
+-
+-def _param_memo(f, param):
+-    if isinstance(f, Command):
+-        f.params.append(param)
+-    else:
+-        if not hasattr(f, "__click_params__"):
+-            f.__click_params__ = []
+-        f.__click_params__.append(param)
+-
+-
+-def argument(*param_decls, **attrs):
+-    """Attaches an argument to the command.  All positional arguments are
+-    passed as parameter declarations to :class:`Argument`; all keyword
+-    arguments are forwarded unchanged (except ``cls``).
+-    This is equivalent to creating an :class:`Argument` instance manually
+-    and attaching it to the :attr:`Command.params` list.
+-
+-    :param cls: the argument class to instantiate.  This defaults to
+-                :class:`Argument`.
+-    """
+-
+-    def decorator(f):
+-        ArgumentClass = attrs.pop("cls", Argument)
+-        _param_memo(f, ArgumentClass(param_decls, **attrs))
+-        return f
+-
+-    return decorator
+-
+-
+-def option(*param_decls, **attrs):
+-    """Attaches an option to the command.  All positional arguments are
+-    passed as parameter declarations to :class:`Option`; all keyword
+-    arguments are forwarded unchanged (except ``cls``).
+-    This is equivalent to creating an :class:`Option` instance manually
+-    and attaching it to the :attr:`Command.params` list.
+-
+-    :param cls: the option class to instantiate.  This defaults to
+-                :class:`Option`.
+-    """
+-
+-    def decorator(f):
+-        # Issue 926, copy attrs, so pre-defined options can re-use the same cls=
+-        option_attrs = attrs.copy()
+-
+-        if "help" in option_attrs:
+-            option_attrs["help"] = inspect.cleandoc(option_attrs["help"])
+-        OptionClass = option_attrs.pop("cls", Option)
+-        _param_memo(f, OptionClass(param_decls, **option_attrs))
+-        return f
+-
+-    return decorator
+-
+-
+-def confirmation_option(*param_decls, **attrs):
+-    """Shortcut for confirmation prompts that can be ignored by passing
+-    ``--yes`` as parameter.
+-
+-    This is equivalent to decorating a function with :func:`option` with
+-    the following parameters::
+-
+-        def callback(ctx, param, value):
+-            if not value:
+-                ctx.abort()
+-
+-        @click.command()
+-        @click.option('--yes', is_flag=True, callback=callback,
+-                      expose_value=False, prompt='Do you want to continue?')
+-        def dropdb():
+-            pass
+-    """
+-
+-    def decorator(f):
+-        def callback(ctx, param, value):
+-            if not value:
+-                ctx.abort()
+-
+-        attrs.setdefault("is_flag", True)
+-        attrs.setdefault("callback", callback)
+-        attrs.setdefault("expose_value", False)
+-        attrs.setdefault("prompt", "Do you want to continue?")
+-        attrs.setdefault("help", "Confirm the action without prompting.")
+-        return option(*(param_decls or ("--yes",)), **attrs)(f)
+-
+-    return decorator
+-
+-
+-def password_option(*param_decls, **attrs):
+-    """Shortcut for password prompts.
+-
+-    This is equivalent to decorating a function with :func:`option` with
+-    the following parameters::
+-
+-        @click.command()
+-        @click.option('--password', prompt=True, confirmation_prompt=True,
+-                      hide_input=True)
+-        def changeadmin(password):
+-            pass
+-    """
+-
+-    def decorator(f):
+-        attrs.setdefault("prompt", True)
+-        attrs.setdefault("confirmation_prompt", True)
+-        attrs.setdefault("hide_input", True)
+-        return option(*(param_decls or ("--password",)), **attrs)(f)
+-
+-    return decorator
+-
+-
+-def version_option(version=None, *param_decls, **attrs):
+-    """Adds a ``--version`` option which immediately ends the program
+-    printing out the version number.  This is implemented as an eager
+-    option that prints the version and exits the program in the callback.
+-
+-    :param version: the version number to show.  If not provided Click
+-                    attempts an auto discovery via setuptools.
+-    :param prog_name: the name of the program (defaults to autodetection)
+-    :param message: custom message to show instead of the default
+-                    (``'%(prog)s, version %(version)s'``)
+-    :param others: everything else is forwarded to :func:`option`.
+-    """
+-    if version is None:
+-        if hasattr(sys, "_getframe"):
+-            module = sys._getframe(1).f_globals.get("__name__")
+-        else:
+-            module = ""
+-
+-    def decorator(f):
+-        prog_name = attrs.pop("prog_name", None)
+-        message = attrs.pop("message", "%(prog)s, version %(version)s")
+-
+-        def callback(ctx, param, value):
+-            if not value or ctx.resilient_parsing:
+-                return
+-            prog = prog_name
+-            if prog is None:
+-                prog = ctx.find_root().info_name
+-            ver = version
+-            if ver is None:
+-                try:
+-                    import pkg_resources
+-                except ImportError:
+-                    pass
+-                else:
+-                    for dist in pkg_resources.working_set:
+-                        scripts = dist.get_entry_map().get("console_scripts") or {}
+-                        for entry_point in scripts.values():
+-                            if entry_point.module_name == module:
+-                                ver = dist.version
+-                                break
+-                if ver is None:
+-                    raise RuntimeError("Could not determine version")
+-            echo(message % {"prog": prog, "version": ver}, color=ctx.color)
+-            ctx.exit()
+-
+-        attrs.setdefault("is_flag", True)
+-        attrs.setdefault("expose_value", False)
+-        attrs.setdefault("is_eager", True)
+-        attrs.setdefault("help", "Show the version and exit.")
+-        attrs["callback"] = callback
+-        return option(*(param_decls or ("--version",)), **attrs)(f)
+-
+-    return decorator
+-
+-
+-def help_option(*param_decls, **attrs):
+-    """Adds a ``--help`` option which immediately ends the program
+-    printing out the help page.  This is usually unnecessary to add as
+-    this is added by default to all commands unless suppressed.
+-
+-    Like :func:`version_option`, this is implemented as eager option that
+-    prints in the callback and exits.
+-
+-    All arguments are forwarded to :func:`option`.
+-    """
+-
+-    def decorator(f):
+-        def callback(ctx, param, value):
+-            if value and not ctx.resilient_parsing:
+-                echo(ctx.get_help(), color=ctx.color)
+-                ctx.exit()
+-
+-        attrs.setdefault("is_flag", True)
+-        attrs.setdefault("expose_value", False)
+-        attrs.setdefault("help", "Show this message and exit.")
+-        attrs.setdefault("is_eager", True)
+-        attrs["callback"] = callback
+-        return option(*(param_decls or ("--help",)), **attrs)(f)
+-
+-    return decorator
+diff --git a/dynaconf/vendor_src/click/exceptions.py b/dynaconf/vendor_src/click/exceptions.py
+deleted file mode 100644
+index 25b02bb..0000000
+--- a/dynaconf/vendor_src/click/exceptions.py
++++ /dev/null
+@@ -1,233 +0,0 @@
+-from ._compat import filename_to_ui
+-from ._compat import get_text_stderr
+-from .utils import echo
+-
+-
+-def _join_param_hints(param_hint):
+-    if isinstance(param_hint, (tuple, list)):
+-        return " / ".join(repr(x) for x in param_hint)
+-    return param_hint
+-
+-
+-class ClickException(Exception):
+-    """An exception that Click can handle and show to the user."""
+-
+-    #: The exit code for this exception.
+-    exit_code = 1
+-
+-    def __init__(self, message):
+-        super().__init__(message)
+-        self.message = message
+-
+-    def format_message(self):
+-        return self.message
+-
+-    def __str__(self):
+-        return self.message
+-
+-    def show(self, file=None):
+-        if file is None:
+-            file = get_text_stderr()
+-        echo(f"Error: {self.format_message()}", file=file)
+-
+-
+-class UsageError(ClickException):
+-    """An internal exception that signals a usage error.  This typically
+-    aborts any further handling.
+-
+-    :param message: the error message to display.
+-    :param ctx: optionally the context that caused this error.  Click will
+-                fill in the context automatically in some situations.
+-    """
+-
+-    exit_code = 2
+-
+-    def __init__(self, message, ctx=None):
+-        ClickException.__init__(self, message)
+-        self.ctx = ctx
+-        self.cmd = self.ctx.command if self.ctx else None
+-
+-    def show(self, file=None):
+-        if file is None:
+-            file = get_text_stderr()
+-        color = None
+-        hint = ""
+-        if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None:
+-            hint = (
+-                f"Try '{self.ctx.command_path}"
+-                f" {self.ctx.help_option_names[0]}' for help.\n"
+-            )
+-        if self.ctx is not None:
+-            color = self.ctx.color
+-            echo(f"{self.ctx.get_usage()}\n{hint}", file=file, color=color)
+-        echo(f"Error: {self.format_message()}", file=file, color=color)
+-
+-
+-class BadParameter(UsageError):
+-    """An exception that formats out a standardized error message for a
+-    bad parameter.  This is useful when thrown from a callback or type as
+-    Click will attach contextual information to it (for instance, which
+-    parameter it is).
+-
+-    .. versionadded:: 2.0
+-
+-    :param param: the parameter object that caused this error.  This can
+-                  be left out, and Click will attach this info itself
+-                  if possible.
+-    :param param_hint: a string that shows up as parameter name.  This
+-                       can be used as alternative to `param` in cases
+-                       where custom validation should happen.  If it is
+-                       a string it's used as such, if it's a list then
+-                       each item is quoted and separated.
+-    """
+-
+-    def __init__(self, message, ctx=None, param=None, param_hint=None):
+-        UsageError.__init__(self, message, ctx)
+-        self.param = param
+-        self.param_hint = param_hint
+-
+-    def format_message(self):
+-        if self.param_hint is not None:
+-            param_hint = self.param_hint
+-        elif self.param is not None:
+-            param_hint = self.param.get_error_hint(self.ctx)
+-        else:
+-            return f"Invalid value: {self.message}"
+-        param_hint = _join_param_hints(param_hint)
+-
+-        return f"Invalid value for {param_hint}: {self.message}"
+-
+-
+-class MissingParameter(BadParameter):
+-    """Raised if click required an option or argument but it was not
+-    provided when invoking the script.
+-
+-    .. versionadded:: 4.0
+-
+-    :param param_type: a string that indicates the type of the parameter.
+-                       The default is to inherit the parameter type from
+-                       the given `param`.  Valid values are ``'parameter'``,
+-                       ``'option'`` or ``'argument'``.
+-    """
+-
+-    def __init__(
+-        self, message=None, ctx=None, param=None, param_hint=None, param_type=None
+-    ):
+-        BadParameter.__init__(self, message, ctx, param, param_hint)
+-        self.param_type = param_type
+-
+-    def format_message(self):
+-        if self.param_hint is not None:
+-            param_hint = self.param_hint
+-        elif self.param is not None:
+-            param_hint = self.param.get_error_hint(self.ctx)
+-        else:
+-            param_hint = None
+-        param_hint = _join_param_hints(param_hint)
+-
+-        param_type = self.param_type
+-        if param_type is None and self.param is not None:
+-            param_type = self.param.param_type_name
+-
+-        msg = self.message
+-        if self.param is not None:
+-            msg_extra = self.param.type.get_missing_message(self.param)
+-            if msg_extra:
+-                if msg:
+-                    msg += f".  {msg_extra}"
+-                else:
+-                    msg = msg_extra
+-
+-        hint_str = f" {param_hint}" if param_hint else ""
+-        return f"Missing {param_type}{hint_str}.{' ' if msg else ''}{msg or ''}"
+-
+-    def __str__(self):
+-        if self.message is None:
+-            param_name = self.param.name if self.param else None
+-            return f"missing parameter: {param_name}"
+-        else:
+-            return self.message
+-
+-
+-class NoSuchOption(UsageError):
+-    """Raised if click attempted to handle an option that does not
+-    exist.
+-
+-    .. versionadded:: 4.0
+-    """
+-
+-    def __init__(self, option_name, message=None, possibilities=None, ctx=None):
+-        if message is None:
+-            message = f"no such option: {option_name}"
+-        UsageError.__init__(self, message, ctx)
+-        self.option_name = option_name
+-        self.possibilities = possibilities
+-
+-    def format_message(self):
+-        bits = [self.message]
+-        if self.possibilities:
+-            if len(self.possibilities) == 1:
+-                bits.append(f"Did you mean {self.possibilities[0]}?")
+-            else:
+-                possibilities = sorted(self.possibilities)
+-                bits.append(f"(Possible options: {', '.join(possibilities)})")
+-        return "  ".join(bits)
+-
+-
+-class BadOptionUsage(UsageError):
+-    """Raised if an option is generally supplied but the use of the option
+-    was incorrect.  This is for instance raised if the number of arguments
+-    for an option is not correct.
+-
+-    .. versionadded:: 4.0
+-
+-    :param option_name: the name of the option being used incorrectly.
+-    """
+-
+-    def __init__(self, option_name, message, ctx=None):
+-        UsageError.__init__(self, message, ctx)
+-        self.option_name = option_name
+-
+-
+-class BadArgumentUsage(UsageError):
+-    """Raised if an argument is generally supplied but the use of the argument
+-    was incorrect.  This is for instance raised if the number of values
+-    for an argument is not correct.
+-
+-    .. versionadded:: 6.0
+-    """
+-
+-    def __init__(self, message, ctx=None):
+-        UsageError.__init__(self, message, ctx)
+-
+-
+-class FileError(ClickException):
+-    """Raised if a file cannot be opened."""
+-
+-    def __init__(self, filename, hint=None):
+-        ui_filename = filename_to_ui(filename)
+-        if hint is None:
+-            hint = "unknown error"
+-        ClickException.__init__(self, hint)
+-        self.ui_filename = ui_filename
+-        self.filename = filename
+-
+-    def format_message(self):
+-        return f"Could not open file {self.ui_filename}: {self.message}"
+-
+-
+-class Abort(RuntimeError):
+-    """An internal signalling exception that signals Click to abort."""
+-
+-
+-class Exit(RuntimeError):
+-    """An exception that indicates that the application should exit with some
+-    status code.
+-
+-    :param code: the status code to exit with.
+-    """
+-
+-    __slots__ = ("exit_code",)
+-
+-    def __init__(self, code=0):
+-        self.exit_code = code
+diff --git a/dynaconf/vendor_src/click/formatting.py b/dynaconf/vendor_src/click/formatting.py
+deleted file mode 100644
+index a298c2e..0000000
+--- a/dynaconf/vendor_src/click/formatting.py
++++ /dev/null
+@@ -1,279 +0,0 @@
+-from contextlib import contextmanager
+-
+-from ._compat import term_len
+-from .parser import split_opt
+-from .termui import get_terminal_size
+-
+-# Can force a width.  This is used by the test system
+-FORCED_WIDTH = None
+-
+-
+-def measure_table(rows):
+-    widths = {}
+-    for row in rows:
+-        for idx, col in enumerate(row):
+-            widths[idx] = max(widths.get(idx, 0), term_len(col))
+-    return tuple(y for x, y in sorted(widths.items()))
+-
+-
+-def iter_rows(rows, col_count):
+-    for row in rows:
+-        row = tuple(row)
+-        yield row + ("",) * (col_count - len(row))
+-
+-
+-def wrap_text(
+-    text, width=78, initial_indent="", subsequent_indent="", preserve_paragraphs=False
+-):
+-    """A helper function that intelligently wraps text.  By default, it
+-    assumes that it operates on a single paragraph of text but if the
+-    `preserve_paragraphs` parameter is provided it will intelligently
+-    handle paragraphs (defined by two empty lines).
+-
+-    If paragraphs are handled, a paragraph can be prefixed with an empty
+-    line containing the ``\\b`` character (``\\x08``) to indicate that
+-    no rewrapping should happen in that block.
+-
+-    :param text: the text that should be rewrapped.
+-    :param width: the maximum width for the text.
+-    :param initial_indent: the initial indent that should be placed on the
+-                           first line as a string.
+-    :param subsequent_indent: the indent string that should be placed on
+-                              each consecutive line.
+-    :param preserve_paragraphs: if this flag is set then the wrapping will
+-                                intelligently handle paragraphs.
+-    """
+-    from ._textwrap import TextWrapper
+-
+-    text = text.expandtabs()
+-    wrapper = TextWrapper(
+-        width,
+-        initial_indent=initial_indent,
+-        subsequent_indent=subsequent_indent,
+-        replace_whitespace=False,
+-    )
+-    if not preserve_paragraphs:
+-        return wrapper.fill(text)
+-
+-    p = []
+-    buf = []
+-    indent = None
+-
+-    def _flush_par():
+-        if not buf:
+-            return
+-        if buf[0].strip() == "\b":
+-            p.append((indent or 0, True, "\n".join(buf[1:])))
+-        else:
+-            p.append((indent or 0, False, " ".join(buf)))
+-        del buf[:]
+-
+-    for line in text.splitlines():
+-        if not line:
+-            _flush_par()
+-            indent = None
+-        else:
+-            if indent is None:
+-                orig_len = term_len(line)
+-                line = line.lstrip()
+-                indent = orig_len - term_len(line)
+-            buf.append(line)
+-    _flush_par()
+-
+-    rv = []
+-    for indent, raw, text in p:
+-        with wrapper.extra_indent(" " * indent):
+-            if raw:
+-                rv.append(wrapper.indent_only(text))
+-            else:
+-                rv.append(wrapper.fill(text))
+-
+-    return "\n\n".join(rv)
+-
+-
+-class HelpFormatter:
+-    """This class helps with formatting text-based help pages.  It's
+-    usually just needed for very special internal cases, but it's also
+-    exposed so that developers can write their own fancy outputs.
+-
+-    At present, it always writes into memory.
+-
+-    :param indent_increment: the additional increment for each level.
+-    :param width: the width for the text.  This defaults to the terminal
+-                  width clamped to a maximum of 78.
+-    """
+-
+-    def __init__(self, indent_increment=2, width=None, max_width=None):
+-        self.indent_increment = indent_increment
+-        if max_width is None:
+-            max_width = 80
+-        if width is None:
+-            width = FORCED_WIDTH
+-            if width is None:
+-                width = max(min(get_terminal_size()[0], max_width) - 2, 50)
+-        self.width = width
+-        self.current_indent = 0
+-        self.buffer = []
+-
+-    def write(self, string):
+-        """Writes a unicode string into the internal buffer."""
+-        self.buffer.append(string)
+-
+-    def indent(self):
+-        """Increases the indentation."""
+-        self.current_indent += self.indent_increment
+-
+-    def dedent(self):
+-        """Decreases the indentation."""
+-        self.current_indent -= self.indent_increment
+-
+-    def write_usage(self, prog, args="", prefix="Usage: "):
+-        """Writes a usage line into the buffer.
+-
+-        :param prog: the program name.
+-        :param args: whitespace separated list of arguments.
+-        :param prefix: the prefix for the first line.
+-        """
+-        usage_prefix = f"{prefix:>{self.current_indent}}{prog} "
+-        text_width = self.width - self.current_indent
+-
+-        if text_width >= (term_len(usage_prefix) + 20):
+-            # The arguments will fit to the right of the prefix.
+-            indent = " " * term_len(usage_prefix)
+-            self.write(
+-                wrap_text(
+-                    args,
+-                    text_width,
+-                    initial_indent=usage_prefix,
+-                    subsequent_indent=indent,
+-                )
+-            )
+-        else:
+-            # The prefix is too long, put the arguments on the next line.
+-            self.write(usage_prefix)
+-            self.write("\n")
+-            indent = " " * (max(self.current_indent, term_len(prefix)) + 4)
+-            self.write(
+-                wrap_text(
+-                    args, text_width, initial_indent=indent, subsequent_indent=indent
+-                )
+-            )
+-
+-        self.write("\n")
+-
+-    def write_heading(self, heading):
+-        """Writes a heading into the buffer."""
+-        self.write(f"{'':>{self.current_indent}}{heading}:\n")
+-
+-    def write_paragraph(self):
+-        """Writes a paragraph into the buffer."""
+-        if self.buffer:
+-            self.write("\n")
+-
+-    def write_text(self, text):
+-        """Writes re-indented text into the buffer.  This rewraps and
+-        preserves paragraphs.
+-        """
+-        text_width = max(self.width - self.current_indent, 11)
+-        indent = " " * self.current_indent
+-        self.write(
+-            wrap_text(
+-                text,
+-                text_width,
+-                initial_indent=indent,
+-                subsequent_indent=indent,
+-                preserve_paragraphs=True,
+-            )
+-        )
+-        self.write("\n")
+-
+-    def write_dl(self, rows, col_max=30, col_spacing=2):
+-        """Writes a definition list into the buffer.  This is how options
+-        and commands are usually formatted.
+-
+-        :param rows: a list of two item tuples for the terms and values.
+-        :param col_max: the maximum width of the first column.
+-        :param col_spacing: the number of spaces between the first and
+-                            second column.
+-        """
+-        rows = list(rows)
+-        widths = measure_table(rows)
+-        if len(widths) != 2:
+-            raise TypeError("Expected two columns for definition list")
+-
+-        first_col = min(widths[0], col_max) + col_spacing
+-
+-        for first, second in iter_rows(rows, len(widths)):
+-            self.write(f"{'':>{self.current_indent}}{first}")
+-            if not second:
+-                self.write("\n")
+-                continue
+-            if term_len(first) <= first_col - col_spacing:
+-                self.write(" " * (first_col - term_len(first)))
+-            else:
+-                self.write("\n")
+-                self.write(" " * (first_col + self.current_indent))
+-
+-            text_width = max(self.width - first_col - 2, 10)
+-            wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True)
+-            lines = wrapped_text.splitlines()
+-
+-            if lines:
+-                self.write(f"{lines[0]}\n")
+-
+-                for line in lines[1:]:
+-                    self.write(f"{'':>{first_col + self.current_indent}}{line}\n")
+-
+-                if len(lines) > 1:
+-                    # separate long help from next option
+-                    self.write("\n")
+-            else:
+-                self.write("\n")
+-
+-    @contextmanager
+-    def section(self, name):
+-        """Helpful context manager that writes a paragraph, a heading,
+-        and the indents.
+-
+-        :param name: the section name that is written as heading.
+-        """
+-        self.write_paragraph()
+-        self.write_heading(name)
+-        self.indent()
+-        try:
+-            yield
+-        finally:
+-            self.dedent()
+-
+-    @contextmanager
+-    def indentation(self):
+-        """A context manager that increases the indentation."""
+-        self.indent()
+-        try:
+-            yield
+-        finally:
+-            self.dedent()
+-
+-    def getvalue(self):
+-        """Returns the buffer contents."""
+-        return "".join(self.buffer)
+-
+-
+-def join_options(options):
+-    """Given a list of option strings this joins them in the most appropriate
+-    way and returns them in the form ``(formatted_string,
+-    any_prefix_is_slash)`` where the second item in the tuple is a flag that
+-    indicates if any of the option prefixes was a slash.
+-    """
+-    rv = []
+-    any_prefix_is_slash = False
+-    for opt in options:
+-        prefix = split_opt(opt)[0]
+-        if prefix == "/":
+-            any_prefix_is_slash = True
+-        rv.append((len(prefix), opt))
+-
+-    rv.sort(key=lambda x: x[0])
+-
+-    rv = ", ".join(x[1] for x in rv)
+-    return rv, any_prefix_is_slash
+diff --git a/dynaconf/vendor_src/click/globals.py b/dynaconf/vendor_src/click/globals.py
+deleted file mode 100644
+index 1649f9a..0000000
+--- a/dynaconf/vendor_src/click/globals.py
++++ /dev/null
+@@ -1,47 +0,0 @@
+-from threading import local
+-
+-_local = local()
+-
+-
+-def get_current_context(silent=False):
+-    """Returns the current click context.  This can be used as a way to
+-    access the current context object from anywhere.  This is a more implicit
+-    alternative to the :func:`pass_context` decorator.  This function is
+-    primarily useful for helpers such as :func:`echo` which might be
+-    interested in changing its behavior based on the current context.
+-
+-    To push the current context, :meth:`Context.scope` can be used.
+-
+-    .. versionadded:: 5.0
+-
+-    :param silent: if set to `True` the return value is `None` if no context
+-                   is available.  The default behavior is to raise a
+-                   :exc:`RuntimeError`.
+-    """
+-    try:
+-        return _local.stack[-1]
+-    except (AttributeError, IndexError):
+-        if not silent:
+-            raise RuntimeError("There is no active click context.")
+-
+-
+-def push_context(ctx):
+-    """Pushes a new context to the current stack."""
+-    _local.__dict__.setdefault("stack", []).append(ctx)
+-
+-
+-def pop_context():
+-    """Removes the top level from the stack."""
+-    _local.stack.pop()
+-
+-
+-def resolve_color_default(color=None):
+-    """"Internal helper to get the default value of the color flag.  If a
+-    value is passed it's returned unchanged, otherwise it's looked up from
+-    the current context.
+-    """
+-    if color is not None:
+-        return color
+-    ctx = get_current_context(silent=True)
+-    if ctx is not None:
+-        return ctx.color
+diff --git a/dynaconf/vendor_src/click/parser.py b/dynaconf/vendor_src/click/parser.py
+deleted file mode 100644
+index 158abb0..0000000
+--- a/dynaconf/vendor_src/click/parser.py
++++ /dev/null
+@@ -1,431 +0,0 @@
+-"""
+-This module started out as largely a copy paste from the stdlib's
+-optparse module with the features removed that we do not need from
+-optparse because we implement them in Click on a higher level (for
+-instance type handling, help formatting and a lot more).
+-
+-The plan is to remove more and more from here over time.
+-
+-The reason this is a different module and not optparse from the stdlib
+-is that there are differences in 2.x and 3.x about the error messages
+-generated and optparse in the stdlib uses gettext for no good reason
+-and might cause us issues.
+-
+-Click uses parts of optparse written by Gregory P. Ward and maintained
+-by the Python Software Foundation. This is limited to code in parser.py.
+-
+-Copyright 2001-2006 Gregory P. Ward. All rights reserved.
+-Copyright 2002-2006 Python Software Foundation. All rights reserved.
+-"""
+-# This code uses parts of optparse written by Gregory P. Ward and
+-# maintained by the Python Software Foundation.
+-# Copyright 2001-2006 Gregory P. Ward
+-# Copyright 2002-2006 Python Software Foundation
+-import re
+-from collections import deque
+-
+-from .exceptions import BadArgumentUsage
+-from .exceptions import BadOptionUsage
+-from .exceptions import NoSuchOption
+-from .exceptions import UsageError
+-
+-
+-def _unpack_args(args, nargs_spec):
+-    """Given an iterable of arguments and an iterable of nargs specifications,
+-    it returns a tuple with all the unpacked arguments at the first index
+-    and all remaining arguments as the second.
+-
+-    The nargs specification is the number of arguments that should be consumed
+-    or `-1` to indicate that this position should eat up all the remainders.
+-
+-    Missing items are filled with `None`.
+-    """
+-    args = deque(args)
+-    nargs_spec = deque(nargs_spec)
+-    rv = []
+-    spos = None
+-
+-    def _fetch(c):
+-        try:
+-            if spos is None:
+-                return c.popleft()
+-            else:
+-                return c.pop()
+-        except IndexError:
+-            return None
+-
+-    while nargs_spec:
+-        nargs = _fetch(nargs_spec)
+-        if nargs == 1:
+-            rv.append(_fetch(args))
+-        elif nargs > 1:
+-            x = [_fetch(args) for _ in range(nargs)]
+-            # If we're reversed, we're pulling in the arguments in reverse,
+-            # so we need to turn them around.
+-            if spos is not None:
+-                x.reverse()
+-            rv.append(tuple(x))
+-        elif nargs < 0:
+-            if spos is not None:
+-                raise TypeError("Cannot have two nargs < 0")
+-            spos = len(rv)
+-            rv.append(None)
+-
+-    # spos is the position of the wildcard (star).  If it's not `None`,
+-    # we fill it with the remainder.
+-    if spos is not None:
+-        rv[spos] = tuple(args)
+-        args = []
+-        rv[spos + 1 :] = reversed(rv[spos + 1 :])
+-
+-    return tuple(rv), list(args)
+-
+-
+-def _error_opt_args(nargs, opt):
+-    if nargs == 1:
+-        raise BadOptionUsage(opt, f"{opt} option requires an argument")
+-    raise BadOptionUsage(opt, f"{opt} option requires {nargs} arguments")
+-
+-
+-def split_opt(opt):
+-    first = opt[:1]
+-    if first.isalnum():
+-        return "", opt
+-    if opt[1:2] == first:
+-        return opt[:2], opt[2:]
+-    return first, opt[1:]
+-
+-
+-def normalize_opt(opt, ctx):
+-    if ctx is None or ctx.token_normalize_func is None:
+-        return opt
+-    prefix, opt = split_opt(opt)
+-    return f"{prefix}{ctx.token_normalize_func(opt)}"
+-
+-
+-def split_arg_string(string):
+-    """Given an argument string this attempts to split it into small parts."""
+-    rv = []
+-    for match in re.finditer(
+-        r"('([^'\\]*(?:\\.[^'\\]*)*)'|\"([^\"\\]*(?:\\.[^\"\\]*)*)\"|\S+)\s*",
+-        string,
+-        re.S,
+-    ):
+-        arg = match.group().strip()
+-        if arg[:1] == arg[-1:] and arg[:1] in "\"'":
+-            arg = arg[1:-1].encode("ascii", "backslashreplace").decode("unicode-escape")
+-        try:
+-            arg = type(string)(arg)
+-        except UnicodeError:
+-            pass
+-        rv.append(arg)
+-    return rv
+-
+-
+-class Option:
+-    def __init__(self, opts, dest, action=None, nargs=1, const=None, obj=None):
+-        self._short_opts = []
+-        self._long_opts = []
+-        self.prefixes = set()
+-
+-        for opt in opts:
+-            prefix, value = split_opt(opt)
+-            if not prefix:
+-                raise ValueError(f"Invalid start character for option ({opt})")
+-            self.prefixes.add(prefix[0])
+-            if len(prefix) == 1 and len(value) == 1:
+-                self._short_opts.append(opt)
+-            else:
+-                self._long_opts.append(opt)
+-                self.prefixes.add(prefix)
+-
+-        if action is None:
+-            action = "store"
+-
+-        self.dest = dest
+-        self.action = action
+-        self.nargs = nargs
+-        self.const = const
+-        self.obj = obj
+-
+-    @property
+-    def takes_value(self):
+-        return self.action in ("store", "append")
+-
+-    def process(self, value, state):
+-        if self.action == "store":
+-            state.opts[self.dest] = value
+-        elif self.action == "store_const":
+-            state.opts[self.dest] = self.const
+-        elif self.action == "append":
+-            state.opts.setdefault(self.dest, []).append(value)
+-        elif self.action == "append_const":
+-            state.opts.setdefault(self.dest, []).append(self.const)
+-        elif self.action == "count":
+-            state.opts[self.dest] = state.opts.get(self.dest, 0) + 1
+-        else:
+-            raise ValueError(f"unknown action '{self.action}'")
+-        state.order.append(self.obj)
+-
+-
+-class Argument:
+-    def __init__(self, dest, nargs=1, obj=None):
+-        self.dest = dest
+-        self.nargs = nargs
+-        self.obj = obj
+-
+-    def process(self, value, state):
+-        if self.nargs > 1:
+-            holes = sum(1 for x in value if x is None)
+-            if holes == len(value):
+-                value = None
+-            elif holes != 0:
+-                raise BadArgumentUsage(
+-                    f"argument {self.dest} takes {self.nargs} values"
+-                )
+-        state.opts[self.dest] = value
+-        state.order.append(self.obj)
+-
+-
+-class ParsingState:
+-    def __init__(self, rargs):
+-        self.opts = {}
+-        self.largs = []
+-        self.rargs = rargs
+-        self.order = []
+-
+-
+-class OptionParser:
+-    """The option parser is an internal class that is ultimately used to
+-    parse options and arguments.  It's modelled after optparse and brings
+-    a similar but vastly simplified API.  It should generally not be used
+-    directly as the high level Click classes wrap it for you.
+-
+-    It's not nearly as extensible as optparse or argparse as it does not
+-    implement features that are implemented on a higher level (such as
+-    types or defaults).
+-
+-    :param ctx: optionally the :class:`~click.Context` where this parser
+-                should go with.
+-    """
+-
+-    def __init__(self, ctx=None):
+-        #: The :class:`~click.Context` for this parser.  This might be
+-        #: `None` for some advanced use cases.
+-        self.ctx = ctx
+-        #: This controls how the parser deals with interspersed arguments.
+-        #: If this is set to `False`, the parser will stop on the first
+-        #: non-option.  Click uses this to implement nested subcommands
+-        #: safely.
+-        self.allow_interspersed_args = True
+-        #: This tells the parser how to deal with unknown options.  By
+-        #: default it will error out (which is sensible), but there is a
+-        #: second mode where it will ignore it and continue processing
+-        #: after shifting all the unknown options into the resulting args.
+-        self.ignore_unknown_options = False
+-        if ctx is not None:
+-            self.allow_interspersed_args = ctx.allow_interspersed_args
+-            self.ignore_unknown_options = ctx.ignore_unknown_options
+-        self._short_opt = {}
+-        self._long_opt = {}
+-        self._opt_prefixes = {"-", "--"}
+-        self._args = []
+-
+-    def add_option(self, opts, dest, action=None, nargs=1, const=None, obj=None):
+-        """Adds a new option named `dest` to the parser.  The destination
+-        is not inferred (unlike with optparse) and needs to be explicitly
+-        provided.  Action can be any of ``store``, ``store_const``,
+-        ``append``, ``appnd_const`` or ``count``.
+-
+-        The `obj` can be used to identify the option in the order list
+-        that is returned from the parser.
+-        """
+-        if obj is None:
+-            obj = dest
+-        opts = [normalize_opt(opt, self.ctx) for opt in opts]
+-        option = Option(opts, dest, action=action, nargs=nargs, const=const, obj=obj)
+-        self._opt_prefixes.update(option.prefixes)
+-        for opt in option._short_opts:
+-            self._short_opt[opt] = option
+-        for opt in option._long_opts:
+-            self._long_opt[opt] = option
+-
+-    def add_argument(self, dest, nargs=1, obj=None):
+-        """Adds a positional argument named `dest` to the parser.
+-
+-        The `obj` can be used to identify the option in the order list
+-        that is returned from the parser.
+-        """
+-        if obj is None:
+-            obj = dest
+-        self._args.append(Argument(dest=dest, nargs=nargs, obj=obj))
+-
+-    def parse_args(self, args):
+-        """Parses positional arguments and returns ``(values, args, order)``
+-        for the parsed options and arguments as well as the leftover
+-        arguments if there are any.  The order is a list of objects as they
+-        appear on the command line.  If arguments appear multiple times they
+-        will be memorized multiple times as well.
+-        """
+-        state = ParsingState(args)
+-        try:
+-            self._process_args_for_options(state)
+-            self._process_args_for_args(state)
+-        except UsageError:
+-            if self.ctx is None or not self.ctx.resilient_parsing:
+-                raise
+-        return state.opts, state.largs, state.order
+-
+-    def _process_args_for_args(self, state):
+-        pargs, args = _unpack_args(
+-            state.largs + state.rargs, [x.nargs for x in self._args]
+-        )
+-
+-        for idx, arg in enumerate(self._args):
+-            arg.process(pargs[idx], state)
+-
+-        state.largs = args
+-        state.rargs = []
+-
+-    def _process_args_for_options(self, state):
+-        while state.rargs:
+-            arg = state.rargs.pop(0)
+-            arglen = len(arg)
+-            # Double dashes always handled explicitly regardless of what
+-            # prefixes are valid.
+-            if arg == "--":
+-                return
+-            elif arg[:1] in self._opt_prefixes and arglen > 1:
+-                self._process_opts(arg, state)
+-            elif self.allow_interspersed_args:
+-                state.largs.append(arg)
+-            else:
+-                state.rargs.insert(0, arg)
+-                return
+-
+-        # Say this is the original argument list:
+-        # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)]
+-        #                            ^
+-        # (we are about to process arg(i)).
+-        #
+-        # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of
+-        # [arg0, ..., arg(i-1)] (any options and their arguments will have
+-        # been removed from largs).
+-        #
+-        # The while loop will usually consume 1 or more arguments per pass.
+-        # If it consumes 1 (eg. arg is an option that takes no arguments),
+-        # then after _process_arg() is done the situation is:
+-        #
+-        #   largs = subset of [arg0, ..., arg(i)]
+-        #   rargs = [arg(i+1), ..., arg(N-1)]
+-        #
+-        # If allow_interspersed_args is false, largs will always be
+-        # *empty* -- still a subset of [arg0, ..., arg(i-1)], but
+-        # not a very interesting subset!
+-
+-    def _match_long_opt(self, opt, explicit_value, state):
+-        if opt not in self._long_opt:
+-            possibilities = [word for word in self._long_opt if word.startswith(opt)]
+-            raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx)
+-
+-        option = self._long_opt[opt]
+-        if option.takes_value:
+-            # At this point it's safe to modify rargs by injecting the
+-            # explicit value, because no exception is raised in this
+-            # branch.  This means that the inserted value will be fully
+-            # consumed.
+-            if explicit_value is not None:
+-                state.rargs.insert(0, explicit_value)
+-
+-            nargs = option.nargs
+-            if len(state.rargs) < nargs:
+-                _error_opt_args(nargs, opt)
+-            elif nargs == 1:
+-                value = state.rargs.pop(0)
+-            else:
+-                value = tuple(state.rargs[:nargs])
+-                del state.rargs[:nargs]
+-
+-        elif explicit_value is not None:
+-            raise BadOptionUsage(opt, f"{opt} option does not take a value")
+-
+-        else:
+-            value = None
+-
+-        option.process(value, state)
+-
+-    def _match_short_opt(self, arg, state):
+-        stop = False
+-        i = 1
+-        prefix = arg[0]
+-        unknown_options = []
+-
+-        for ch in arg[1:]:
+-            opt = normalize_opt(f"{prefix}{ch}", self.ctx)
+-            option = self._short_opt.get(opt)
+-            i += 1
+-
+-            if not option:
+-                if self.ignore_unknown_options:
+-                    unknown_options.append(ch)
+-                    continue
+-                raise NoSuchOption(opt, ctx=self.ctx)
+-            if option.takes_value:
+-                # Any characters left in arg?  Pretend they're the
+-                # next arg, and stop consuming characters of arg.
+-                if i < len(arg):
+-                    state.rargs.insert(0, arg[i:])
+-                    stop = True
+-
+-                nargs = option.nargs
+-                if len(state.rargs) < nargs:
+-                    _error_opt_args(nargs, opt)
+-                elif nargs == 1:
+-                    value = state.rargs.pop(0)
+-                else:
+-                    value = tuple(state.rargs[:nargs])
+-                    del state.rargs[:nargs]
+-
+-            else:
+-                value = None
+-
+-            option.process(value, state)
+-
+-            if stop:
+-                break
+-
+-        # If we got any unknown options we re-combinate the string of the
+-        # remaining options and re-attach the prefix, then report that
+-        # to the state as new larg.  This way there is basic combinatorics
+-        # that can be achieved while still ignoring unknown arguments.
+-        if self.ignore_unknown_options and unknown_options:
+-            state.largs.append(f"{prefix}{''.join(unknown_options)}")
+-
+-    def _process_opts(self, arg, state):
+-        explicit_value = None
+-        # Long option handling happens in two parts.  The first part is
+-        # supporting explicitly attached values.  In any case, we will try
+-        # to long match the option first.
+-        if "=" in arg:
+-            long_opt, explicit_value = arg.split("=", 1)
+-        else:
+-            long_opt = arg
+-        norm_long_opt = normalize_opt(long_opt, self.ctx)
+-
+-        # At this point we will match the (assumed) long option through
+-        # the long option matching code.  Note that this allows options
+-        # like "-foo" to be matched as long options.
+-        try:
+-            self._match_long_opt(norm_long_opt, explicit_value, state)
+-        except NoSuchOption:
+-            # At this point the long option matching failed, and we need
+-            # to try with short options.  However there is a special rule
+-            # which says, that if we have a two character options prefix
+-            # (applies to "--foo" for instance), we do not dispatch to the
+-            # short option code and will instead raise the no option
+-            # error.
+-            if arg[:2] not in self._opt_prefixes:
+-                return self._match_short_opt(arg, state)
+-            if not self.ignore_unknown_options:
+-                raise
+-            state.largs.append(arg)
+diff --git a/dynaconf/vendor_src/click/termui.py b/dynaconf/vendor_src/click/termui.py
+deleted file mode 100644
+index a1bdf2a..0000000
+--- a/dynaconf/vendor_src/click/termui.py
++++ /dev/null
+@@ -1,688 +0,0 @@
+-import inspect
+-import io
+-import itertools
+-import os
+-import struct
+-import sys
+-
+-from ._compat import DEFAULT_COLUMNS
+-from ._compat import get_winterm_size
+-from ._compat import isatty
+-from ._compat import strip_ansi
+-from ._compat import WIN
+-from .exceptions import Abort
+-from .exceptions import UsageError
+-from .globals import resolve_color_default
+-from .types import Choice
+-from .types import convert_type
+-from .types import Path
+-from .utils import echo
+-from .utils import LazyFile
+-
+-# The prompt functions to use.  The doc tools currently override these
+-# functions to customize how they work.
+-visible_prompt_func = input
+-
+-_ansi_colors = {
+-    "black": 30,
+-    "red": 31,
+-    "green": 32,
+-    "yellow": 33,
+-    "blue": 34,
+-    "magenta": 35,
+-    "cyan": 36,
+-    "white": 37,
+-    "reset": 39,
+-    "bright_black": 90,
+-    "bright_red": 91,
+-    "bright_green": 92,
+-    "bright_yellow": 93,
+-    "bright_blue": 94,
+-    "bright_magenta": 95,
+-    "bright_cyan": 96,
+-    "bright_white": 97,
+-}
+-_ansi_reset_all = "\033[0m"
+-
+-
+-def hidden_prompt_func(prompt):
+-    import getpass
+-
+-    return getpass.getpass(prompt)
+-
+-
+-def _build_prompt(
+-    text, suffix, show_default=False, default=None, show_choices=True, type=None
+-):
+-    prompt = text
+-    if type is not None and show_choices and isinstance(type, Choice):
+-        prompt += f" ({', '.join(map(str, type.choices))})"
+-    if default is not None and show_default:
+-        prompt = f"{prompt} [{_format_default(default)}]"
+-    return f"{prompt}{suffix}"
+-
+-
+-def _format_default(default):
+-    if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"):
+-        return default.name
+-
+-    return default
+-
+-
+-def prompt(
+-    text,
+-    default=None,
+-    hide_input=False,
+-    confirmation_prompt=False,
+-    type=None,
+-    value_proc=None,
+-    prompt_suffix=": ",
+-    show_default=True,
+-    err=False,
+-    show_choices=True,
+-):
+-    """Prompts a user for input.  This is a convenience function that can
+-    be used to prompt a user for input later.
+-
+-    If the user aborts the input by sending a interrupt signal, this
+-    function will catch it and raise a :exc:`Abort` exception.
+-
+-    .. versionadded:: 7.0
+-       Added the show_choices parameter.
+-
+-    .. versionadded:: 6.0
+-       Added unicode support for cmd.exe on Windows.
+-
+-    .. versionadded:: 4.0
+-       Added the `err` parameter.
+-
+-    :param text: the text to show for the prompt.
+-    :param default: the default value to use if no input happens.  If this
+-                    is not given it will prompt until it's aborted.
+-    :param hide_input: if this is set to true then the input value will
+-                       be hidden.
+-    :param confirmation_prompt: asks for confirmation for the value.
+-    :param type: the type to use to check the value against.
+-    :param value_proc: if this parameter is provided it's a function that
+-                       is invoked instead of the type conversion to
+-                       convert a value.
+-    :param prompt_suffix: a suffix that should be added to the prompt.
+-    :param show_default: shows or hides the default value in the prompt.
+-    :param err: if set to true the file defaults to ``stderr`` instead of
+-                ``stdout``, the same as with echo.
+-    :param show_choices: Show or hide choices if the passed type is a Choice.
+-                         For example if type is a Choice of either day or week,
+-                         show_choices is true and text is "Group by" then the
+-                         prompt will be "Group by (day, week): ".
+-    """
+-    result = None
+-
+-    def prompt_func(text):
+-        f = hidden_prompt_func if hide_input else visible_prompt_func
+-        try:
+-            # Write the prompt separately so that we get nice
+-            # coloring through colorama on Windows
+-            echo(text, nl=False, err=err)
+-            return f("")
+-        except (KeyboardInterrupt, EOFError):
+-            # getpass doesn't print a newline if the user aborts input with ^C.
+-            # Allegedly this behavior is inherited from getpass(3).
+-            # A doc bug has been filed at https://bugs.python.org/issue24711
+-            if hide_input:
+-                echo(None, err=err)
+-            raise Abort()
+-
+-    if value_proc is None:
+-        value_proc = convert_type(type, default)
+-
+-    prompt = _build_prompt(
+-        text, prompt_suffix, show_default, default, show_choices, type
+-    )
+-
+-    while 1:
+-        while 1:
+-            value = prompt_func(prompt)
+-            if value:
+-                break
+-            elif default is not None:
+-                if isinstance(value_proc, Path):
+-                    # validate Path default value(exists, dir_okay etc.)
+-                    value = default
+-                    break
+-                return default
+-        try:
+-            result = value_proc(value)
+-        except UsageError as e:
+-            echo(f"Error: {e.message}", err=err)  # noqa: B306
+-            continue
+-        if not confirmation_prompt:
+-            return result
+-        while 1:
+-            value2 = prompt_func("Repeat for confirmation: ")
+-            if value2:
+-                break
+-        if value == value2:
+-            return result
+-        echo("Error: the two entered values do not match", err=err)
+-
+-
+-def confirm(
+-    text, default=False, abort=False, prompt_suffix=": ", show_default=True, err=False
+-):
+-    """Prompts for confirmation (yes/no question).
+-
+-    If the user aborts the input by sending a interrupt signal this
+-    function will catch it and raise a :exc:`Abort` exception.
+-
+-    .. versionadded:: 4.0
+-       Added the `err` parameter.
+-
+-    :param text: the question to ask.
+-    :param default: the default for the prompt.
+-    :param abort: if this is set to `True` a negative answer aborts the
+-                  exception by raising :exc:`Abort`.
+-    :param prompt_suffix: a suffix that should be added to the prompt.
+-    :param show_default: shows or hides the default value in the prompt.
+-    :param err: if set to true the file defaults to ``stderr`` instead of
+-                ``stdout``, the same as with echo.
+-    """
+-    prompt = _build_prompt(
+-        text, prompt_suffix, show_default, "Y/n" if default else "y/N"
+-    )
+-    while 1:
+-        try:
+-            # Write the prompt separately so that we get nice
+-            # coloring through colorama on Windows
+-            echo(prompt, nl=False, err=err)
+-            value = visible_prompt_func("").lower().strip()
+-        except (KeyboardInterrupt, EOFError):
+-            raise Abort()
+-        if value in ("y", "yes"):
+-            rv = True
+-        elif value in ("n", "no"):
+-            rv = False
+-        elif value == "":
+-            rv = default
+-        else:
+-            echo("Error: invalid input", err=err)
+-            continue
+-        break
+-    if abort and not rv:
+-        raise Abort()
+-    return rv
+-
+-
+-def get_terminal_size():
+-    """Returns the current size of the terminal as tuple in the form
+-    ``(width, height)`` in columns and rows.
+-    """
+-    import shutil
+-
+-    if hasattr(shutil, "get_terminal_size"):
+-        return shutil.get_terminal_size()
+-
+-    # We provide a sensible default for get_winterm_size() when being invoked
+-    # inside a subprocess. Without this, it would not provide a useful input.
+-    if get_winterm_size is not None:
+-        size = get_winterm_size()
+-        if size == (0, 0):
+-            return (79, 24)
+-        else:
+-            return size
+-
+-    def ioctl_gwinsz(fd):
+-        try:
+-            import fcntl
+-            import termios
+-
+-            cr = struct.unpack("hh", fcntl.ioctl(fd, termios.TIOCGWINSZ, "1234"))
+-        except Exception:
+-            return
+-        return cr
+-
+-    cr = ioctl_gwinsz(0) or ioctl_gwinsz(1) or ioctl_gwinsz(2)
+-    if not cr:
+-        try:
+-            fd = os.open(os.ctermid(), os.O_RDONLY)
+-            try:
+-                cr = ioctl_gwinsz(fd)
+-            finally:
+-                os.close(fd)
+-        except Exception:
+-            pass
+-    if not cr or not cr[0] or not cr[1]:
+-        cr = (os.environ.get("LINES", 25), os.environ.get("COLUMNS", DEFAULT_COLUMNS))
+-    return int(cr[1]), int(cr[0])
+-
+-
+-def echo_via_pager(text_or_generator, color=None):
+-    """This function takes a text and shows it via an environment specific
+-    pager on stdout.
+-
+-    .. versionchanged:: 3.0
+-       Added the `color` flag.
+-
+-    :param text_or_generator: the text to page, or alternatively, a
+-                              generator emitting the text to page.
+-    :param color: controls if the pager supports ANSI colors or not.  The
+-                  default is autodetection.
+-    """
+-    color = resolve_color_default(color)
+-
+-    if inspect.isgeneratorfunction(text_or_generator):
+-        i = text_or_generator()
+-    elif isinstance(text_or_generator, str):
+-        i = [text_or_generator]
+-    else:
+-        i = iter(text_or_generator)
+-
+-    # convert every element of i to a text type if necessary
+-    text_generator = (el if isinstance(el, str) else str(el) for el in i)
+-
+-    from ._termui_impl import pager
+-
+-    return pager(itertools.chain(text_generator, "\n"), color)
+-
+-
+-def progressbar(
+-    iterable=None,
+-    length=None,
+-    label=None,
+-    show_eta=True,
+-    show_percent=None,
+-    show_pos=False,
+-    item_show_func=None,
+-    fill_char="#",
+-    empty_char="-",
+-    bar_template="%(label)s  [%(bar)s]  %(info)s",
+-    info_sep="  ",
+-    width=36,
+-    file=None,
+-    color=None,
+-):
+-    """This function creates an iterable context manager that can be used
+-    to iterate over something while showing a progress bar.  It will
+-    either iterate over the `iterable` or `length` items (that are counted
+-    up).  While iteration happens, this function will print a rendered
+-    progress bar to the given `file` (defaults to stdout) and will attempt
+-    to calculate remaining time and more.  By default, this progress bar
+-    will not be rendered if the file is not a terminal.
+-
+-    The context manager creates the progress bar.  When the context
+-    manager is entered the progress bar is already created.  With every
+-    iteration over the progress bar, the iterable passed to the bar is
+-    advanced and the bar is updated.  When the context manager exits,
+-    a newline is printed and the progress bar is finalized on screen.
+-
+-    Note: The progress bar is currently designed for use cases where the
+-    total progress can be expected to take at least several seconds.
+-    Because of this, the ProgressBar class object won't display
+-    progress that is considered too fast, and progress where the time
+-    between steps is less than a second.
+-
+-    No printing must happen or the progress bar will be unintentionally
+-    destroyed.
+-
+-    Example usage::
+-
+-        with progressbar(items) as bar:
+-            for item in bar:
+-                do_something_with(item)
+-
+-    Alternatively, if no iterable is specified, one can manually update the
+-    progress bar through the `update()` method instead of directly
+-    iterating over the progress bar.  The update method accepts the number
+-    of steps to increment the bar with::
+-
+-        with progressbar(length=chunks.total_bytes) as bar:
+-            for chunk in chunks:
+-                process_chunk(chunk)
+-                bar.update(chunks.bytes)
+-
+-    The ``update()`` method also takes an optional value specifying the
+-    ``current_item`` at the new position. This is useful when used
+-    together with ``item_show_func`` to customize the output for each
+-    manual step::
+-
+-        with click.progressbar(
+-            length=total_size,
+-            label='Unzipping archive',
+-            item_show_func=lambda a: a.filename
+-        ) as bar:
+-            for archive in zip_file:
+-                archive.extract()
+-                bar.update(archive.size, archive)
+-
+-    .. versionadded:: 2.0
+-
+-    .. versionadded:: 4.0
+-       Added the `color` parameter.  Added a `update` method to the
+-       progressbar object.
+-
+-    :param iterable: an iterable to iterate over.  If not provided the length
+-                     is required.
+-    :param length: the number of items to iterate over.  By default the
+-                   progressbar will attempt to ask the iterator about its
+-                   length, which might or might not work.  If an iterable is
+-                   also provided this parameter can be used to override the
+-                   length.  If an iterable is not provided the progress bar
+-                   will iterate over a range of that length.
+-    :param label: the label to show next to the progress bar.
+-    :param show_eta: enables or disables the estimated time display.  This is
+-                     automatically disabled if the length cannot be
+-                     determined.
+-    :param show_percent: enables or disables the percentage display.  The
+-                         default is `True` if the iterable has a length or
+-                         `False` if not.
+-    :param show_pos: enables or disables the absolute position display.  The
+-                     default is `False`.
+-    :param item_show_func: a function called with the current item which
+-                           can return a string to show the current item
+-                           next to the progress bar.  Note that the current
+-                           item can be `None`!
+-    :param fill_char: the character to use to show the filled part of the
+-                      progress bar.
+-    :param empty_char: the character to use to show the non-filled part of
+-                       the progress bar.
+-    :param bar_template: the format string to use as template for the bar.
+-                         The parameters in it are ``label`` for the label,
+-                         ``bar`` for the progress bar and ``info`` for the
+-                         info section.
+-    :param info_sep: the separator between multiple info items (eta etc.)
+-    :param width: the width of the progress bar in characters, 0 means full
+-                  terminal width
+-    :param file: the file to write to.  If this is not a terminal then
+-                 only the label is printed.
+-    :param color: controls if the terminal supports ANSI colors or not.  The
+-                  default is autodetection.  This is only needed if ANSI
+-                  codes are included anywhere in the progress bar output
+-                  which is not the case by default.
+-    """
+-    from ._termui_impl import ProgressBar
+-
+-    color = resolve_color_default(color)
+-    return ProgressBar(
+-        iterable=iterable,
+-        length=length,
+-        show_eta=show_eta,
+-        show_percent=show_percent,
+-        show_pos=show_pos,
+-        item_show_func=item_show_func,
+-        fill_char=fill_char,
+-        empty_char=empty_char,
+-        bar_template=bar_template,
+-        info_sep=info_sep,
+-        file=file,
+-        label=label,
+-        width=width,
+-        color=color,
+-    )
+-
+-
+-def clear():
+-    """Clears the terminal screen.  This will have the effect of clearing
+-    the whole visible space of the terminal and moving the cursor to the
+-    top left.  This does not do anything if not connected to a terminal.
+-
+-    .. versionadded:: 2.0
+-    """
+-    if not isatty(sys.stdout):
+-        return
+-    # If we're on Windows and we don't have colorama available, then we
+-    # clear the screen by shelling out.  Otherwise we can use an escape
+-    # sequence.
+-    if WIN:
+-        os.system("cls")
+-    else:
+-        sys.stdout.write("\033[2J\033[1;1H")
+-
+-
+-def style(
+-    text,
+-    fg=None,
+-    bg=None,
+-    bold=None,
+-    dim=None,
+-    underline=None,
+-    blink=None,
+-    reverse=None,
+-    reset=True,
+-):
+-    """Styles a text with ANSI styles and returns the new string.  By
+-    default the styling is self contained which means that at the end
+-    of the string a reset code is issued.  This can be prevented by
+-    passing ``reset=False``.
+-
+-    Examples::
+-
+-        click.echo(click.style('Hello World!', fg='green'))
+-        click.echo(click.style('ATTENTION!', blink=True))
+-        click.echo(click.style('Some things', reverse=True, fg='cyan'))
+-
+-    Supported color names:
+-
+-    * ``black`` (might be a gray)
+-    * ``red``
+-    * ``green``
+-    * ``yellow`` (might be an orange)
+-    * ``blue``
+-    * ``magenta``
+-    * ``cyan``
+-    * ``white`` (might be light gray)
+-    * ``bright_black``
+-    * ``bright_red``
+-    * ``bright_green``
+-    * ``bright_yellow``
+-    * ``bright_blue``
+-    * ``bright_magenta``
+-    * ``bright_cyan``
+-    * ``bright_white``
+-    * ``reset`` (reset the color code only)
+-
+-    .. versionadded:: 2.0
+-
+-    .. versionadded:: 7.0
+-       Added support for bright colors.
+-
+-    :param text: the string to style with ansi codes.
+-    :param fg: if provided this will become the foreground color.
+-    :param bg: if provided this will become the background color.
+-    :param bold: if provided this will enable or disable bold mode.
+-    :param dim: if provided this will enable or disable dim mode.  This is
+-                badly supported.
+-    :param underline: if provided this will enable or disable underline.
+-    :param blink: if provided this will enable or disable blinking.
+-    :param reverse: if provided this will enable or disable inverse
+-                    rendering (foreground becomes background and the
+-                    other way round).
+-    :param reset: by default a reset-all code is added at the end of the
+-                  string which means that styles do not carry over.  This
+-                  can be disabled to compose styles.
+-    """
+-    bits = []
+-    if fg:
+-        try:
+-            bits.append(f"\033[{_ansi_colors[fg]}m")
+-        except KeyError:
+-            raise TypeError(f"Unknown color {fg!r}")
+-    if bg:
+-        try:
+-            bits.append(f"\033[{_ansi_colors[bg] + 10}m")
+-        except KeyError:
+-            raise TypeError(f"Unknown color {bg!r}")
+-    if bold is not None:
+-        bits.append(f"\033[{1 if bold else 22}m")
+-    if dim is not None:
+-        bits.append(f"\033[{2 if dim else 22}m")
+-    if underline is not None:
+-        bits.append(f"\033[{4 if underline else 24}m")
+-    if blink is not None:
+-        bits.append(f"\033[{5 if blink else 25}m")
+-    if reverse is not None:
+-        bits.append(f"\033[{7 if reverse else 27}m")
+-    bits.append(text)
+-    if reset:
+-        bits.append(_ansi_reset_all)
+-    return "".join(bits)
+-
+-
+-def unstyle(text):
+-    """Removes ANSI styling information from a string.  Usually it's not
+-    necessary to use this function as Click's echo function will
+-    automatically remove styling if necessary.
+-
+-    .. versionadded:: 2.0
+-
+-    :param text: the text to remove style information from.
+-    """
+-    return strip_ansi(text)
+-
+-
+-def secho(message=None, file=None, nl=True, err=False, color=None, **styles):
+-    """This function combines :func:`echo` and :func:`style` into one
+-    call.  As such the following two calls are the same::
+-
+-        click.secho('Hello World!', fg='green')
+-        click.echo(click.style('Hello World!', fg='green'))
+-
+-    All keyword arguments are forwarded to the underlying functions
+-    depending on which one they go with.
+-
+-    .. versionadded:: 2.0
+-    """
+-    if message is not None:
+-        message = style(message, **styles)
+-    return echo(message, file=file, nl=nl, err=err, color=color)
+-
+-
+-def edit(
+-    text=None, editor=None, env=None, require_save=True, extension=".txt", filename=None
+-):
+-    r"""Edits the given text in the defined editor.  If an editor is given
+-    (should be the full path to the executable but the regular operating
+-    system search path is used for finding the executable) it overrides
+-    the detected editor.  Optionally, some environment variables can be
+-    used.  If the editor is closed without changes, `None` is returned.  In
+-    case a file is edited directly the return value is always `None` and
+-    `require_save` and `extension` are ignored.
+-
+-    If the editor cannot be opened a :exc:`UsageError` is raised.
+-
+-    Note for Windows: to simplify cross-platform usage, the newlines are
+-    automatically converted from POSIX to Windows and vice versa.  As such,
+-    the message here will have ``\n`` as newline markers.
+-
+-    :param text: the text to edit.
+-    :param editor: optionally the editor to use.  Defaults to automatic
+-                   detection.
+-    :param env: environment variables to forward to the editor.
+-    :param require_save: if this is true, then not saving in the editor
+-                         will make the return value become `None`.
+-    :param extension: the extension to tell the editor about.  This defaults
+-                      to `.txt` but changing this might change syntax
+-                      highlighting.
+-    :param filename: if provided it will edit this file instead of the
+-                     provided text contents.  It will not use a temporary
+-                     file as an indirection in that case.
+-    """
+-    from ._termui_impl import Editor
+-
+-    editor = Editor(
+-        editor=editor, env=env, require_save=require_save, extension=extension
+-    )
+-    if filename is None:
+-        return editor.edit(text)
+-    editor.edit_file(filename)
+-
+-
+-def launch(url, wait=False, locate=False):
+-    """This function launches the given URL (or filename) in the default
+-    viewer application for this file type.  If this is an executable, it
+-    might launch the executable in a new session.  The return value is
+-    the exit code of the launched application.  Usually, ``0`` indicates
+-    success.
+-
+-    Examples::
+-
+-        click.launch('https://click.palletsprojects.com/')
+-        click.launch('/my/downloaded/file', locate=True)
+-
+-    .. versionadded:: 2.0
+-
+-    :param url: URL or filename of the thing to launch.
+-    :param wait: waits for the program to stop.
+-    :param locate: if this is set to `True` then instead of launching the
+-                   application associated with the URL it will attempt to
+-                   launch a file manager with the file located.  This
+-                   might have weird effects if the URL does not point to
+-                   the filesystem.
+-    """
+-    from ._termui_impl import open_url
+-
+-    return open_url(url, wait=wait, locate=locate)
+-
+-
+-# If this is provided, getchar() calls into this instead.  This is used
+-# for unittesting purposes.
+-_getchar = None
+-
+-
+-def getchar(echo=False):
+-    """Fetches a single character from the terminal and returns it.  This
+-    will always return a unicode character and under certain rare
+-    circumstances this might return more than one character.  The
+-    situations which more than one character is returned is when for
+-    whatever reason multiple characters end up in the terminal buffer or
+-    standard input was not actually a terminal.
+-
+-    Note that this will always read from the terminal, even if something
+-    is piped into the standard input.
+-
+-    Note for Windows: in rare cases when typing non-ASCII characters, this
+-    function might wait for a second character and then return both at once.
+-    This is because certain Unicode characters look like special-key markers.
+-
+-    .. versionadded:: 2.0
+-
+-    :param echo: if set to `True`, the character read will also show up on
+-                 the terminal.  The default is to not show it.
+-    """
+-    f = _getchar
+-    if f is None:
+-        from ._termui_impl import getchar as f
+-    return f(echo)
+-
+-
+-def raw_terminal():
+-    from ._termui_impl import raw_terminal as f
+-
+-    return f()
+-
+-
+-def pause(info="Press any key to continue ...", err=False):
+-    """This command stops execution and waits for the user to press any
+-    key to continue.  This is similar to the Windows batch "pause"
+-    command.  If the program is not run through a terminal, this command
+-    will instead do nothing.
+-
+-    .. versionadded:: 2.0
+-
+-    .. versionadded:: 4.0
+-       Added the `err` parameter.
+-
+-    :param info: the info string to print before pausing.
+-    :param err: if set to message goes to ``stderr`` instead of
+-                ``stdout``, the same as with echo.
+-    """
+-    if not isatty(sys.stdin) or not isatty(sys.stdout):
+-        return
+-    try:
+-        if info:
+-            echo(info, nl=False, err=err)
+-        try:
+-            getchar()
+-        except (KeyboardInterrupt, EOFError):
+-            pass
+-    finally:
+-        if info:
+-            echo(err=err)
+diff --git a/dynaconf/vendor_src/click/testing.py b/dynaconf/vendor_src/click/testing.py
+deleted file mode 100644
+index fd6bf61..0000000
+--- a/dynaconf/vendor_src/click/testing.py
++++ /dev/null
+@@ -1,362 +0,0 @@
+-import contextlib
+-import io
+-import os
+-import shlex
+-import shutil
+-import sys
+-import tempfile
+-
+-from . import formatting
+-from . import termui
+-from . import utils
+-from ._compat import _find_binary_reader
+-
+-
+-class EchoingStdin:
+-    def __init__(self, input, output):
+-        self._input = input
+-        self._output = output
+-
+-    def __getattr__(self, x):
+-        return getattr(self._input, x)
+-
+-    def _echo(self, rv):
+-        self._output.write(rv)
+-        return rv
+-
+-    def read(self, n=-1):
+-        return self._echo(self._input.read(n))
+-
+-    def readline(self, n=-1):
+-        return self._echo(self._input.readline(n))
+-
+-    def readlines(self):
+-        return [self._echo(x) for x in self._input.readlines()]
+-
+-    def __iter__(self):
+-        return iter(self._echo(x) for x in self._input)
+-
+-    def __repr__(self):
+-        return repr(self._input)
+-
+-
+-def make_input_stream(input, charset):
+-    # Is already an input stream.
+-    if hasattr(input, "read"):
+-        rv = _find_binary_reader(input)
+-
+-        if rv is not None:
+-            return rv
+-
+-        raise TypeError("Could not find binary reader for input stream.")
+-
+-    if input is None:
+-        input = b""
+-    elif not isinstance(input, bytes):
+-        input = input.encode(charset)
+-
+-    return io.BytesIO(input)
+-
+-
+-class Result:
+-    """Holds the captured result of an invoked CLI script."""
+-
+-    def __init__(
+-        self, runner, stdout_bytes, stderr_bytes, exit_code, exception, exc_info=None
+-    ):
+-        #: The runner that created the result
+-        self.runner = runner
+-        #: The standard output as bytes.
+-        self.stdout_bytes = stdout_bytes
+-        #: The standard error as bytes, or None if not available
+-        self.stderr_bytes = stderr_bytes
+-        #: The exit code as integer.
+-        self.exit_code = exit_code
+-        #: The exception that happened if one did.
+-        self.exception = exception
+-        #: The traceback
+-        self.exc_info = exc_info
+-
+-    @property
+-    def output(self):
+-        """The (standard) output as unicode string."""
+-        return self.stdout
+-
+-    @property
+-    def stdout(self):
+-        """The standard output as unicode string."""
+-        return self.stdout_bytes.decode(self.runner.charset, "replace").replace(
+-            "\r\n", "\n"
+-        )
+-
+-    @property
+-    def stderr(self):
+-        """The standard error as unicode string."""
+-        if self.stderr_bytes is None:
+-            raise ValueError("stderr not separately captured")
+-        return self.stderr_bytes.decode(self.runner.charset, "replace").replace(
+-            "\r\n", "\n"
+-        )
+-
+-    def __repr__(self):
+-        exc_str = repr(self.exception) if self.exception else "okay"
+-        return f"<{type(self).__name__} {exc_str}>"
+-
+-
+-class CliRunner:
+-    """The CLI runner provides functionality to invoke a Click command line
+-    script for unittesting purposes in a isolated environment.  This only
+-    works in single-threaded systems without any concurrency as it changes the
+-    global interpreter state.
+-
+-    :param charset: the character set for the input and output data.
+-    :param env: a dictionary with environment variables for overriding.
+-    :param echo_stdin: if this is set to `True`, then reading from stdin writes
+-                       to stdout.  This is useful for showing examples in
+-                       some circumstances.  Note that regular prompts
+-                       will automatically echo the input.
+-    :param mix_stderr: if this is set to `False`, then stdout and stderr are
+-                       preserved as independent streams.  This is useful for
+-                       Unix-philosophy apps that have predictable stdout and
+-                       noisy stderr, such that each may be measured
+-                       independently
+-    """
+-
+-    def __init__(self, charset="utf-8", env=None, echo_stdin=False, mix_stderr=True):
+-        self.charset = charset
+-        self.env = env or {}
+-        self.echo_stdin = echo_stdin
+-        self.mix_stderr = mix_stderr
+-
+-    def get_default_prog_name(self, cli):
+-        """Given a command object it will return the default program name
+-        for it.  The default is the `name` attribute or ``"root"`` if not
+-        set.
+-        """
+-        return cli.name or "root"
+-
+-    def make_env(self, overrides=None):
+-        """Returns the environment overrides for invoking a script."""
+-        rv = dict(self.env)
+-        if overrides:
+-            rv.update(overrides)
+-        return rv
+-
+-    @contextlib.contextmanager
+-    def isolation(self, input=None, env=None, color=False):
+-        """A context manager that sets up the isolation for invoking of a
+-        command line tool.  This sets up stdin with the given input data
+-        and `os.environ` with the overrides from the given dictionary.
+-        This also rebinds some internals in Click to be mocked (like the
+-        prompt functionality).
+-
+-        This is automatically done in the :meth:`invoke` method.
+-
+-        .. versionadded:: 4.0
+-           The ``color`` parameter was added.
+-
+-        :param input: the input stream to put into sys.stdin.
+-        :param env: the environment overrides as dictionary.
+-        :param color: whether the output should contain color codes. The
+-                      application can still override this explicitly.
+-        """
+-        input = make_input_stream(input, self.charset)
+-
+-        old_stdin = sys.stdin
+-        old_stdout = sys.stdout
+-        old_stderr = sys.stderr
+-        old_forced_width = formatting.FORCED_WIDTH
+-        formatting.FORCED_WIDTH = 80
+-
+-        env = self.make_env(env)
+-
+-        bytes_output = io.BytesIO()
+-
+-        if self.echo_stdin:
+-            input = EchoingStdin(input, bytes_output)
+-
+-        input = io.TextIOWrapper(input, encoding=self.charset)
+-        sys.stdout = io.TextIOWrapper(bytes_output, encoding=self.charset)
+-
+-        if not self.mix_stderr:
+-            bytes_error = io.BytesIO()
+-            sys.stderr = io.TextIOWrapper(bytes_error, encoding=self.charset)
+-
+-        if self.mix_stderr:
+-            sys.stderr = sys.stdout
+-
+-        sys.stdin = input
+-
+-        def visible_input(prompt=None):
+-            sys.stdout.write(prompt or "")
+-            val = input.readline().rstrip("\r\n")
+-            sys.stdout.write(f"{val}\n")
+-            sys.stdout.flush()
+-            return val
+-
+-        def hidden_input(prompt=None):
+-            sys.stdout.write(f"{prompt or ''}\n")
+-            sys.stdout.flush()
+-            return input.readline().rstrip("\r\n")
+-
+-        def _getchar(echo):
+-            char = sys.stdin.read(1)
+-            if echo:
+-                sys.stdout.write(char)
+-                sys.stdout.flush()
+-            return char
+-
+-        default_color = color
+-
+-        def should_strip_ansi(stream=None, color=None):
+-            if color is None:
+-                return not default_color
+-            return not color
+-
+-        old_visible_prompt_func = termui.visible_prompt_func
+-        old_hidden_prompt_func = termui.hidden_prompt_func
+-        old__getchar_func = termui._getchar
+-        old_should_strip_ansi = utils.should_strip_ansi
+-        termui.visible_prompt_func = visible_input
+-        termui.hidden_prompt_func = hidden_input
+-        termui._getchar = _getchar
+-        utils.should_strip_ansi = should_strip_ansi
+-
+-        old_env = {}
+-        try:
+-            for key, value in env.items():
+-                old_env[key] = os.environ.get(key)
+-                if value is None:
+-                    try:
+-                        del os.environ[key]
+-                    except Exception:
+-                        pass
+-                else:
+-                    os.environ[key] = value
+-            yield (bytes_output, not self.mix_stderr and bytes_error)
+-        finally:
+-            for key, value in old_env.items():
+-                if value is None:
+-                    try:
+-                        del os.environ[key]
+-                    except Exception:
+-                        pass
+-                else:
+-                    os.environ[key] = value
+-            sys.stdout = old_stdout
+-            sys.stderr = old_stderr
+-            sys.stdin = old_stdin
+-            termui.visible_prompt_func = old_visible_prompt_func
+-            termui.hidden_prompt_func = old_hidden_prompt_func
+-            termui._getchar = old__getchar_func
+-            utils.should_strip_ansi = old_should_strip_ansi
+-            formatting.FORCED_WIDTH = old_forced_width
+-
+-    def invoke(
+-        self,
+-        cli,
+-        args=None,
+-        input=None,
+-        env=None,
+-        catch_exceptions=True,
+-        color=False,
+-        **extra,
+-    ):
+-        """Invokes a command in an isolated environment.  The arguments are
+-        forwarded directly to the command line script, the `extra` keyword
+-        arguments are passed to the :meth:`~clickpkg.Command.main` function of
+-        the command.
+-
+-        This returns a :class:`Result` object.
+-
+-        .. versionadded:: 3.0
+-           The ``catch_exceptions`` parameter was added.
+-
+-        .. versionchanged:: 3.0
+-           The result object now has an `exc_info` attribute with the
+-           traceback if available.
+-
+-        .. versionadded:: 4.0
+-           The ``color`` parameter was added.
+-
+-        :param cli: the command to invoke
+-        :param args: the arguments to invoke. It may be given as an iterable
+-                     or a string. When given as string it will be interpreted
+-                     as a Unix shell command. More details at
+-                     :func:`shlex.split`.
+-        :param input: the input data for `sys.stdin`.
+-        :param env: the environment overrides.
+-        :param catch_exceptions: Whether to catch any other exceptions than
+-                                 ``SystemExit``.
+-        :param extra: the keyword arguments to pass to :meth:`main`.
+-        :param color: whether the output should contain color codes. The
+-                      application can still override this explicitly.
+-        """
+-        exc_info = None
+-        with self.isolation(input=input, env=env, color=color) as outstreams:
+-            exception = None
+-            exit_code = 0
+-
+-            if isinstance(args, str):
+-                args = shlex.split(args)
+-
+-            try:
+-                prog_name = extra.pop("prog_name")
+-            except KeyError:
+-                prog_name = self.get_default_prog_name(cli)
+-
+-            try:
+-                cli.main(args=args or (), prog_name=prog_name, **extra)
+-            except SystemExit as e:
+-                exc_info = sys.exc_info()
+-                exit_code = e.code
+-                if exit_code is None:
+-                    exit_code = 0
+-
+-                if exit_code != 0:
+-                    exception = e
+-
+-                if not isinstance(exit_code, int):
+-                    sys.stdout.write(str(exit_code))
+-                    sys.stdout.write("\n")
+-                    exit_code = 1
+-
+-            except Exception as e:
+-                if not catch_exceptions:
+-                    raise
+-                exception = e
+-                exit_code = 1
+-                exc_info = sys.exc_info()
+-            finally:
+-                sys.stdout.flush()
+-                stdout = outstreams[0].getvalue()
+-                if self.mix_stderr:
+-                    stderr = None
+-                else:
+-                    stderr = outstreams[1].getvalue()
+-
+-        return Result(
+-            runner=self,
+-            stdout_bytes=stdout,
+-            stderr_bytes=stderr,
+-            exit_code=exit_code,
+-            exception=exception,
+-            exc_info=exc_info,
+-        )
+-
+-    @contextlib.contextmanager
+-    def isolated_filesystem(self):
+-        """A context manager that creates a temporary folder and changes
+-        the current working directory to it for isolated filesystem tests.
+-        """
+-        cwd = os.getcwd()
+-        t = tempfile.mkdtemp()
+-        os.chdir(t)
+-        try:
+-            yield t
+-        finally:
+-            os.chdir(cwd)
+-            try:
+-                shutil.rmtree(t)
+-            except OSError:  # noqa: B014
+-                pass
+diff --git a/dynaconf/vendor_src/click/types.py b/dynaconf/vendor_src/click/types.py
+deleted file mode 100644
+index 93cf701..0000000
+--- a/dynaconf/vendor_src/click/types.py
++++ /dev/null
+@@ -1,726 +0,0 @@
+-import os
+-import stat
+-from datetime import datetime
+-
+-from ._compat import _get_argv_encoding
+-from ._compat import filename_to_ui
+-from ._compat import get_filesystem_encoding
+-from ._compat import get_strerror
+-from ._compat import open_stream
+-from .exceptions import BadParameter
+-from .utils import LazyFile
+-from .utils import safecall
+-
+-
+-class ParamType:
+-    """Helper for converting values through types.  The following is
+-    necessary for a valid type:
+-
+-    *   it needs a name
+-    *   it needs to pass through None unchanged
+-    *   it needs to convert from a string
+-    *   it needs to convert its result type through unchanged
+-        (eg: needs to be idempotent)
+-    *   it needs to be able to deal with param and context being `None`.
+-        This can be the case when the object is used with prompt
+-        inputs.
+-    """
+-
+-    is_composite = False
+-
+-    #: the descriptive name of this type
+-    name = None
+-
+-    #: if a list of this type is expected and the value is pulled from a
+-    #: string environment variable, this is what splits it up.  `None`
+-    #: means any whitespace.  For all parameters the general rule is that
+-    #: whitespace splits them up.  The exception are paths and files which
+-    #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on
+-    #: Windows).
+-    envvar_list_splitter = None
+-
+-    def __call__(self, value, param=None, ctx=None):
+-        if value is not None:
+-            return self.convert(value, param, ctx)
+-
+-    def get_metavar(self, param):
+-        """Returns the metavar default for this param if it provides one."""
+-
+-    def get_missing_message(self, param):
+-        """Optionally might return extra information about a missing
+-        parameter.
+-
+-        .. versionadded:: 2.0
+-        """
+-
+-    def convert(self, value, param, ctx):
+-        """Converts the value.  This is not invoked for values that are
+-        `None` (the missing value).
+-        """
+-        return value
+-
+-    def split_envvar_value(self, rv):
+-        """Given a value from an environment variable this splits it up
+-        into small chunks depending on the defined envvar list splitter.
+-
+-        If the splitter is set to `None`, which means that whitespace splits,
+-        then leading and trailing whitespace is ignored.  Otherwise, leading
+-        and trailing splitters usually lead to empty items being included.
+-        """
+-        return (rv or "").split(self.envvar_list_splitter)
+-
+-    def fail(self, message, param=None, ctx=None):
+-        """Helper method to fail with an invalid value message."""
+-        raise BadParameter(message, ctx=ctx, param=param)
+-
+-
+-class CompositeParamType(ParamType):
+-    is_composite = True
+-
+-    @property
+-    def arity(self):
+-        raise NotImplementedError()
+-
+-
+-class FuncParamType(ParamType):
+-    def __init__(self, func):
+-        self.name = func.__name__
+-        self.func = func
+-
+-    def convert(self, value, param, ctx):
+-        try:
+-            return self.func(value)
+-        except ValueError:
+-            try:
+-                value = str(value)
+-            except UnicodeError:
+-                value = value.decode("utf-8", "replace")
+-
+-            self.fail(value, param, ctx)
+-
+-
+-class UnprocessedParamType(ParamType):
+-    name = "text"
+-
+-    def convert(self, value, param, ctx):
+-        return value
+-
+-    def __repr__(self):
+-        return "UNPROCESSED"
+-
+-
+-class StringParamType(ParamType):
+-    name = "text"
+-
+-    def convert(self, value, param, ctx):
+-        if isinstance(value, bytes):
+-            enc = _get_argv_encoding()
+-            try:
+-                value = value.decode(enc)
+-            except UnicodeError:
+-                fs_enc = get_filesystem_encoding()
+-                if fs_enc != enc:
+-                    try:
+-                        value = value.decode(fs_enc)
+-                    except UnicodeError:
+-                        value = value.decode("utf-8", "replace")
+-                else:
+-                    value = value.decode("utf-8", "replace")
+-            return value
+-        return value
+-
+-    def __repr__(self):
+-        return "STRING"
+-
+-
+-class Choice(ParamType):
+-    """The choice type allows a value to be checked against a fixed set
+-    of supported values. All of these values have to be strings.
+-
+-    You should only pass a list or tuple of choices. Other iterables
+-    (like generators) may lead to surprising results.
+-
+-    The resulting value will always be one of the originally passed choices
+-    regardless of ``case_sensitive`` or any ``ctx.token_normalize_func``
+-    being specified.
+-
+-    See :ref:`choice-opts` for an example.
+-
+-    :param case_sensitive: Set to false to make choices case
+-        insensitive. Defaults to true.
+-    """
+-
+-    name = "choice"
+-
+-    def __init__(self, choices, case_sensitive=True):
+-        self.choices = choices
+-        self.case_sensitive = case_sensitive
+-
+-    def get_metavar(self, param):
+-        return f"[{'|'.join(self.choices)}]"
+-
+-    def get_missing_message(self, param):
+-        choice_str = ",\n\t".join(self.choices)
+-        return f"Choose from:\n\t{choice_str}"
+-
+-    def convert(self, value, param, ctx):
+-        # Match through normalization and case sensitivity
+-        # first do token_normalize_func, then lowercase
+-        # preserve original `value` to produce an accurate message in
+-        # `self.fail`
+-        normed_value = value
+-        normed_choices = {choice: choice for choice in self.choices}
+-
+-        if ctx is not None and ctx.token_normalize_func is not None:
+-            normed_value = ctx.token_normalize_func(value)
+-            normed_choices = {
+-                ctx.token_normalize_func(normed_choice): original
+-                for normed_choice, original in normed_choices.items()
+-            }
+-
+-        if not self.case_sensitive:
+-            normed_value = normed_value.casefold()
+-            normed_choices = {
+-                normed_choice.casefold(): original
+-                for normed_choice, original in normed_choices.items()
+-            }
+-
+-        if normed_value in normed_choices:
+-            return normed_choices[normed_value]
+-
+-        self.fail(
+-            f"invalid choice: {value}. (choose from {', '.join(self.choices)})",
+-            param,
+-            ctx,
+-        )
+-
+-    def __repr__(self):
+-        return f"Choice({list(self.choices)})"
+-
+-
+-class DateTime(ParamType):
+-    """The DateTime type converts date strings into `datetime` objects.
+-
+-    The format strings which are checked are configurable, but default to some
+-    common (non-timezone aware) ISO 8601 formats.
+-
+-    When specifying *DateTime* formats, you should only pass a list or a tuple.
+-    Other iterables, like generators, may lead to surprising results.
+-
+-    The format strings are processed using ``datetime.strptime``, and this
+-    consequently defines the format strings which are allowed.
+-
+-    Parsing is tried using each format, in order, and the first format which
+-    parses successfully is used.
+-
+-    :param formats: A list or tuple of date format strings, in the order in
+-                    which they should be tried. Defaults to
+-                    ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``,
+-                    ``'%Y-%m-%d %H:%M:%S'``.
+-    """
+-
+-    name = "datetime"
+-
+-    def __init__(self, formats=None):
+-        self.formats = formats or ["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"]
+-
+-    def get_metavar(self, param):
+-        return f"[{'|'.join(self.formats)}]"
+-
+-    def _try_to_convert_date(self, value, format):
+-        try:
+-            return datetime.strptime(value, format)
+-        except ValueError:
+-            return None
+-
+-    def convert(self, value, param, ctx):
+-        # Exact match
+-        for format in self.formats:
+-            dtime = self._try_to_convert_date(value, format)
+-            if dtime:
+-                return dtime
+-
+-        self.fail(
+-            f"invalid datetime format: {value}. (choose from {', '.join(self.formats)})"
+-        )
+-
+-    def __repr__(self):
+-        return "DateTime"
+-
+-
+-class IntParamType(ParamType):
+-    name = "integer"
+-
+-    def convert(self, value, param, ctx):
+-        try:
+-            return int(value)
+-        except ValueError:
+-            self.fail(f"{value} is not a valid integer", param, ctx)
+-
+-    def __repr__(self):
+-        return "INT"
+-
+-
+-class IntRange(IntParamType):
+-    """A parameter that works similar to :data:`click.INT` but restricts
+-    the value to fit into a range.  The default behavior is to fail if the
+-    value falls outside the range, but it can also be silently clamped
+-    between the two edges.
+-
+-    See :ref:`ranges` for an example.
+-    """
+-
+-    name = "integer range"
+-
+-    def __init__(self, min=None, max=None, clamp=False):
+-        self.min = min
+-        self.max = max
+-        self.clamp = clamp
+-
+-    def convert(self, value, param, ctx):
+-        rv = IntParamType.convert(self, value, param, ctx)
+-        if self.clamp:
+-            if self.min is not None and rv < self.min:
+-                return self.min
+-            if self.max is not None and rv > self.max:
+-                return self.max
+-        if (
+-            self.min is not None
+-            and rv < self.min
+-            or self.max is not None
+-            and rv > self.max
+-        ):
+-            if self.min is None:
+-                self.fail(
+-                    f"{rv} is bigger than the maximum valid value {self.max}.",
+-                    param,
+-                    ctx,
+-                )
+-            elif self.max is None:
+-                self.fail(
+-                    f"{rv} is smaller than the minimum valid value {self.min}.",
+-                    param,
+-                    ctx,
+-                )
+-            else:
+-                self.fail(
+-                    f"{rv} is not in the valid range of {self.min} to {self.max}.",
+-                    param,
+-                    ctx,
+-                )
+-        return rv
+-
+-    def __repr__(self):
+-        return f"IntRange({self.min}, {self.max})"
+-
+-
+-class FloatParamType(ParamType):
+-    name = "float"
+-
+-    def convert(self, value, param, ctx):
+-        try:
+-            return float(value)
+-        except ValueError:
+-            self.fail(f"{value} is not a valid floating point value", param, ctx)
+-
+-    def __repr__(self):
+-        return "FLOAT"
+-
+-
+-class FloatRange(FloatParamType):
+-    """A parameter that works similar to :data:`click.FLOAT` but restricts
+-    the value to fit into a range.  The default behavior is to fail if the
+-    value falls outside the range, but it can also be silently clamped
+-    between the two edges.
+-
+-    See :ref:`ranges` for an example.
+-    """
+-
+-    name = "float range"
+-
+-    def __init__(self, min=None, max=None, clamp=False):
+-        self.min = min
+-        self.max = max
+-        self.clamp = clamp
+-
+-    def convert(self, value, param, ctx):
+-        rv = FloatParamType.convert(self, value, param, ctx)
+-        if self.clamp:
+-            if self.min is not None and rv < self.min:
+-                return self.min
+-            if self.max is not None and rv > self.max:
+-                return self.max
+-        if (
+-            self.min is not None
+-            and rv < self.min
+-            or self.max is not None
+-            and rv > self.max
+-        ):
+-            if self.min is None:
+-                self.fail(
+-                    f"{rv} is bigger than the maximum valid value {self.max}.",
+-                    param,
+-                    ctx,
+-                )
+-            elif self.max is None:
+-                self.fail(
+-                    f"{rv} is smaller than the minimum valid value {self.min}.",
+-                    param,
+-                    ctx,
+-                )
+-            else:
+-                self.fail(
+-                    f"{rv} is not in the valid range of {self.min} to {self.max}.",
+-                    param,
+-                    ctx,
+-                )
+-        return rv
+-
+-    def __repr__(self):
+-        return f"FloatRange({self.min}, {self.max})"
+-
+-
+-class BoolParamType(ParamType):
+-    name = "boolean"
+-
+-    def convert(self, value, param, ctx):
+-        if isinstance(value, bool):
+-            return bool(value)
+-        value = value.lower()
+-        if value in ("true", "t", "1", "yes", "y"):
+-            return True
+-        elif value in ("false", "f", "0", "no", "n"):
+-            return False
+-        self.fail(f"{value} is not a valid boolean", param, ctx)
+-
+-    def __repr__(self):
+-        return "BOOL"
+-
+-
+-class UUIDParameterType(ParamType):
+-    name = "uuid"
+-
+-    def convert(self, value, param, ctx):
+-        import uuid
+-
+-        try:
+-            return uuid.UUID(value)
+-        except ValueError:
+-            self.fail(f"{value} is not a valid UUID value", param, ctx)
+-
+-    def __repr__(self):
+-        return "UUID"
+-
+-
+-class File(ParamType):
+-    """Declares a parameter to be a file for reading or writing.  The file
+-    is automatically closed once the context tears down (after the command
+-    finished working).
+-
+-    Files can be opened for reading or writing.  The special value ``-``
+-    indicates stdin or stdout depending on the mode.
+-
+-    By default, the file is opened for reading text data, but it can also be
+-    opened in binary mode or for writing.  The encoding parameter can be used
+-    to force a specific encoding.
+-
+-    The `lazy` flag controls if the file should be opened immediately or upon
+-    first IO. The default is to be non-lazy for standard input and output
+-    streams as well as files opened for reading, `lazy` otherwise. When opening a
+-    file lazily for reading, it is still opened temporarily for validation, but
+-    will not be held open until first IO. lazy is mainly useful when opening
+-    for writing to avoid creating the file until it is needed.
+-
+-    Starting with Click 2.0, files can also be opened atomically in which
+-    case all writes go into a separate file in the same folder and upon
+-    completion the file will be moved over to the original location.  This
+-    is useful if a file regularly read by other users is modified.
+-
+-    See :ref:`file-args` for more information.
+-    """
+-
+-    name = "filename"
+-    envvar_list_splitter = os.path.pathsep
+-
+-    def __init__(
+-        self, mode="r", encoding=None, errors="strict", lazy=None, atomic=False
+-    ):
+-        self.mode = mode
+-        self.encoding = encoding
+-        self.errors = errors
+-        self.lazy = lazy
+-        self.atomic = atomic
+-
+-    def resolve_lazy_flag(self, value):
+-        if self.lazy is not None:
+-            return self.lazy
+-        if value == "-":
+-            return False
+-        elif "w" in self.mode:
+-            return True
+-        return False
+-
+-    def convert(self, value, param, ctx):
+-        try:
+-            if hasattr(value, "read") or hasattr(value, "write"):
+-                return value
+-
+-            lazy = self.resolve_lazy_flag(value)
+-
+-            if lazy:
+-                f = LazyFile(
+-                    value, self.mode, self.encoding, self.errors, atomic=self.atomic
+-                )
+-                if ctx is not None:
+-                    ctx.call_on_close(f.close_intelligently)
+-                return f
+-
+-            f, should_close = open_stream(
+-                value, self.mode, self.encoding, self.errors, atomic=self.atomic
+-            )
+-            # If a context is provided, we automatically close the file
+-            # at the end of the context execution (or flush out).  If a
+-            # context does not exist, it's the caller's responsibility to
+-            # properly close the file.  This for instance happens when the
+-            # type is used with prompts.
+-            if ctx is not None:
+-                if should_close:
+-                    ctx.call_on_close(safecall(f.close))
+-                else:
+-                    ctx.call_on_close(safecall(f.flush))
+-            return f
+-        except OSError as e:  # noqa: B014
+-            self.fail(
+-                f"Could not open file: {filename_to_ui(value)}: {get_strerror(e)}",
+-                param,
+-                ctx,
+-            )
+-
+-
+-class Path(ParamType):
+-    """The path type is similar to the :class:`File` type but it performs
+-    different checks.  First of all, instead of returning an open file
+-    handle it returns just the filename.  Secondly, it can perform various
+-    basic checks about what the file or directory should be.
+-
+-    .. versionchanged:: 6.0
+-       `allow_dash` was added.
+-
+-    :param exists: if set to true, the file or directory needs to exist for
+-                   this value to be valid.  If this is not required and a
+-                   file does indeed not exist, then all further checks are
+-                   silently skipped.
+-    :param file_okay: controls if a file is a possible value.
+-    :param dir_okay: controls if a directory is a possible value.
+-    :param writable: if true, a writable check is performed.
+-    :param readable: if true, a readable check is performed.
+-    :param resolve_path: if this is true, then the path is fully resolved
+-                         before the value is passed onwards.  This means
+-                         that it's absolute and symlinks are resolved.  It
+-                         will not expand a tilde-prefix, as this is
+-                         supposed to be done by the shell only.
+-    :param allow_dash: If this is set to `True`, a single dash to indicate
+-                       standard streams is permitted.
+-    :param path_type: optionally a string type that should be used to
+-                      represent the path.  The default is `None` which
+-                      means the return value will be either bytes or
+-                      unicode depending on what makes most sense given the
+-                      input data Click deals with.
+-    """
+-
+-    envvar_list_splitter = os.path.pathsep
+-
+-    def __init__(
+-        self,
+-        exists=False,
+-        file_okay=True,
+-        dir_okay=True,
+-        writable=False,
+-        readable=True,
+-        resolve_path=False,
+-        allow_dash=False,
+-        path_type=None,
+-    ):
+-        self.exists = exists
+-        self.file_okay = file_okay
+-        self.dir_okay = dir_okay
+-        self.writable = writable
+-        self.readable = readable
+-        self.resolve_path = resolve_path
+-        self.allow_dash = allow_dash
+-        self.type = path_type
+-
+-        if self.file_okay and not self.dir_okay:
+-            self.name = "file"
+-            self.path_type = "File"
+-        elif self.dir_okay and not self.file_okay:
+-            self.name = "directory"
+-            self.path_type = "Directory"
+-        else:
+-            self.name = "path"
+-            self.path_type = "Path"
+-
+-    def coerce_path_result(self, rv):
+-        if self.type is not None and not isinstance(rv, self.type):
+-            if self.type is str:
+-                rv = rv.decode(get_filesystem_encoding())
+-            else:
+-                rv = rv.encode(get_filesystem_encoding())
+-        return rv
+-
+-    def convert(self, value, param, ctx):
+-        rv = value
+-
+-        is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-")
+-
+-        if not is_dash:
+-            if self.resolve_path:
+-                rv = os.path.realpath(rv)
+-
+-            try:
+-                st = os.stat(rv)
+-            except OSError:
+-                if not self.exists:
+-                    return self.coerce_path_result(rv)
+-                self.fail(
+-                    f"{self.path_type} {filename_to_ui(value)!r} does not exist.",
+-                    param,
+-                    ctx,
+-                )
+-
+-            if not self.file_okay and stat.S_ISREG(st.st_mode):
+-                self.fail(
+-                    f"{self.path_type} {filename_to_ui(value)!r} is a file.",
+-                    param,
+-                    ctx,
+-                )
+-            if not self.dir_okay and stat.S_ISDIR(st.st_mode):
+-                self.fail(
+-                    f"{self.path_type} {filename_to_ui(value)!r} is a directory.",
+-                    param,
+-                    ctx,
+-                )
+-            if self.writable and not os.access(value, os.W_OK):
+-                self.fail(
+-                    f"{self.path_type} {filename_to_ui(value)!r} is not writable.",
+-                    param,
+-                    ctx,
+-                )
+-            if self.readable and not os.access(value, os.R_OK):
+-                self.fail(
+-                    f"{self.path_type} {filename_to_ui(value)!r} is not readable.",
+-                    param,
+-                    ctx,
+-                )
+-
+-        return self.coerce_path_result(rv)
+-
+-
+-class Tuple(CompositeParamType):
+-    """The default behavior of Click is to apply a type on a value directly.
+-    This works well in most cases, except for when `nargs` is set to a fixed
+-    count and different types should be used for different items.  In this
+-    case the :class:`Tuple` type can be used.  This type can only be used
+-    if `nargs` is set to a fixed number.
+-
+-    For more information see :ref:`tuple-type`.
+-
+-    This can be selected by using a Python tuple literal as a type.
+-
+-    :param types: a list of types that should be used for the tuple items.
+-    """
+-
+-    def __init__(self, types):
+-        self.types = [convert_type(ty) for ty in types]
+-
+-    @property
+-    def name(self):
+-        return f"<{' '.join(ty.name for ty in self.types)}>"
+-
+-    @property
+-    def arity(self):
+-        return len(self.types)
+-
+-    def convert(self, value, param, ctx):
+-        if len(value) != len(self.types):
+-            raise TypeError(
+-                "It would appear that nargs is set to conflict with the"
+-                " composite type arity."
+-            )
+-        return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value))
+-
+-
+-def convert_type(ty, default=None):
+-    """Converts a callable or python type into the most appropriate
+-    param type.
+-    """
+-    guessed_type = False
+-    if ty is None and default is not None:
+-        if isinstance(default, tuple):
+-            ty = tuple(map(type, default))
+-        else:
+-            ty = type(default)
+-        guessed_type = True
+-
+-    if isinstance(ty, tuple):
+-        return Tuple(ty)
+-    if isinstance(ty, ParamType):
+-        return ty
+-    if ty is str or ty is None:
+-        return STRING
+-    if ty is int:
+-        return INT
+-    # Booleans are only okay if not guessed.  This is done because for
+-    # flags the default value is actually a bit of a lie in that it
+-    # indicates which of the flags is the one we want.  See get_default()
+-    # for more information.
+-    if ty is bool and not guessed_type:
+-        return BOOL
+-    if ty is float:
+-        return FLOAT
+-    if guessed_type:
+-        return STRING
+-
+-    # Catch a common mistake
+-    if __debug__:
+-        try:
+-            if issubclass(ty, ParamType):
+-                raise AssertionError(
+-                    f"Attempted to use an uninstantiated parameter type ({ty})."
+-                )
+-        except TypeError:
+-            pass
+-    return FuncParamType(ty)
+-
+-
+-#: A dummy parameter type that just does nothing.  From a user's
+-#: perspective this appears to just be the same as `STRING` but
+-#: internally no string conversion takes place if the input was bytes.
+-#: This is usually useful when working with file paths as they can
+-#: appear in bytes and unicode.
+-#:
+-#: For path related uses the :class:`Path` type is a better choice but
+-#: there are situations where an unprocessed type is useful which is why
+-#: it is is provided.
+-#:
+-#: .. versionadded:: 4.0
+-UNPROCESSED = UnprocessedParamType()
+-
+-#: A unicode string parameter type which is the implicit default.  This
+-#: can also be selected by using ``str`` as type.
+-STRING = StringParamType()
+-
+-#: An integer parameter.  This can also be selected by using ``int`` as
+-#: type.
+-INT = IntParamType()
+-
+-#: A floating point value parameter.  This can also be selected by using
+-#: ``float`` as type.
+-FLOAT = FloatParamType()
+-
+-#: A boolean parameter.  This is the default for boolean flags.  This can
+-#: also be selected by using ``bool`` as a type.
+-BOOL = BoolParamType()
+-
+-#: A UUID parameter.
+-UUID = UUIDParameterType()
+diff --git a/dynaconf/vendor_src/click/utils.py b/dynaconf/vendor_src/click/utils.py
+deleted file mode 100644
+index bd9dd8e..0000000
+--- a/dynaconf/vendor_src/click/utils.py
++++ /dev/null
+@@ -1,440 +0,0 @@
+-import os
+-import sys
+-
+-from ._compat import _default_text_stderr
+-from ._compat import _default_text_stdout
+-from ._compat import _find_binary_writer
+-from ._compat import auto_wrap_for_ansi
+-from ._compat import binary_streams
+-from ._compat import filename_to_ui
+-from ._compat import get_filesystem_encoding
+-from ._compat import get_strerror
+-from ._compat import is_bytes
+-from ._compat import open_stream
+-from ._compat import should_strip_ansi
+-from ._compat import strip_ansi
+-from ._compat import text_streams
+-from ._compat import WIN
+-from .globals import resolve_color_default
+-
+-
+-echo_native_types = (str, bytes, bytearray)
+-
+-
+-def _posixify(name):
+-    return "-".join(name.split()).lower()
+-
+-
+-def safecall(func):
+-    """Wraps a function so that it swallows exceptions."""
+-
+-    def wrapper(*args, **kwargs):
+-        try:
+-            return func(*args, **kwargs)
+-        except Exception:
+-            pass
+-
+-    return wrapper
+-
+-
+-def make_str(value):
+-    """Converts a value into a valid string."""
+-    if isinstance(value, bytes):
+-        try:
+-            return value.decode(get_filesystem_encoding())
+-        except UnicodeError:
+-            return value.decode("utf-8", "replace")
+-    return str(value)
+-
+-
+-def make_default_short_help(help, max_length=45):
+-    """Return a condensed version of help string."""
+-    words = help.split()
+-    total_length = 0
+-    result = []
+-    done = False
+-
+-    for word in words:
+-        if word[-1:] == ".":
+-            done = True
+-        new_length = 1 + len(word) if result else len(word)
+-        if total_length + new_length > max_length:
+-            result.append("...")
+-            done = True
+-        else:
+-            if result:
+-                result.append(" ")
+-            result.append(word)
+-        if done:
+-            break
+-        total_length += new_length
+-
+-    return "".join(result)
+-
+-
+-class LazyFile:
+-    """A lazy file works like a regular file but it does not fully open
+-    the file but it does perform some basic checks early to see if the
+-    filename parameter does make sense.  This is useful for safely opening
+-    files for writing.
+-    """
+-
+-    def __init__(
+-        self, filename, mode="r", encoding=None, errors="strict", atomic=False
+-    ):
+-        self.name = filename
+-        self.mode = mode
+-        self.encoding = encoding
+-        self.errors = errors
+-        self.atomic = atomic
+-
+-        if filename == "-":
+-            self._f, self.should_close = open_stream(filename, mode, encoding, errors)
+-        else:
+-            if "r" in mode:
+-                # Open and close the file in case we're opening it for
+-                # reading so that we can catch at least some errors in
+-                # some cases early.
+-                open(filename, mode).close()
+-            self._f = None
+-            self.should_close = True
+-
+-    def __getattr__(self, name):
+-        return getattr(self.open(), name)
+-
+-    def __repr__(self):
+-        if self._f is not None:
+-            return repr(self._f)
+-        return f"<unopened file '{self.name}' {self.mode}>"
+-
+-    def open(self):
+-        """Opens the file if it's not yet open.  This call might fail with
+-        a :exc:`FileError`.  Not handling this error will produce an error
+-        that Click shows.
+-        """
+-        if self._f is not None:
+-            return self._f
+-        try:
+-            rv, self.should_close = open_stream(
+-                self.name, self.mode, self.encoding, self.errors, atomic=self.atomic
+-            )
+-        except OSError as e:  # noqa: E402
+-            from .exceptions import FileError
+-
+-            raise FileError(self.name, hint=get_strerror(e))
+-        self._f = rv
+-        return rv
+-
+-    def close(self):
+-        """Closes the underlying file, no matter what."""
+-        if self._f is not None:
+-            self._f.close()
+-
+-    def close_intelligently(self):
+-        """This function only closes the file if it was opened by the lazy
+-        file wrapper.  For instance this will never close stdin.
+-        """
+-        if self.should_close:
+-            self.close()
+-
+-    def __enter__(self):
+-        return self
+-
+-    def __exit__(self, exc_type, exc_value, tb):
+-        self.close_intelligently()
+-
+-    def __iter__(self):
+-        self.open()
+-        return iter(self._f)
+-
+-
+-class KeepOpenFile:
+-    def __init__(self, file):
+-        self._file = file
+-
+-    def __getattr__(self, name):
+-        return getattr(self._file, name)
+-
+-    def __enter__(self):
+-        return self
+-
+-    def __exit__(self, exc_type, exc_value, tb):
+-        pass
+-
+-    def __repr__(self):
+-        return repr(self._file)
+-
+-    def __iter__(self):
+-        return iter(self._file)
+-
+-
+-def echo(message=None, file=None, nl=True, err=False, color=None):
+-    """Prints a message plus a newline to the given file or stdout.  On
+-    first sight, this looks like the print function, but it has improved
+-    support for handling Unicode and binary data that does not fail no
+-    matter how badly configured the system is.
+-
+-    Primarily it means that you can print binary data as well as Unicode
+-    data on both 2.x and 3.x to the given file in the most appropriate way
+-    possible.  This is a very carefree function in that it will try its
+-    best to not fail.  As of Click 6.0 this includes support for unicode
+-    output on the Windows console.
+-
+-    In addition to that, if `colorama`_ is installed, the echo function will
+-    also support clever handling of ANSI codes.  Essentially it will then
+-    do the following:
+-
+-    -   add transparent handling of ANSI color codes on Windows.
+-    -   hide ANSI codes automatically if the destination file is not a
+-        terminal.
+-
+-    .. _colorama: https://pypi.org/project/colorama/
+-
+-    .. versionchanged:: 6.0
+-       As of Click 6.0 the echo function will properly support unicode
+-       output on the windows console.  Not that click does not modify
+-       the interpreter in any way which means that `sys.stdout` or the
+-       print statement or function will still not provide unicode support.
+-
+-    .. versionchanged:: 2.0
+-       Starting with version 2.0 of Click, the echo function will work
+-       with colorama if it's installed.
+-
+-    .. versionadded:: 3.0
+-       The `err` parameter was added.
+-
+-    .. versionchanged:: 4.0
+-       Added the `color` flag.
+-
+-    :param message: the message to print
+-    :param file: the file to write to (defaults to ``stdout``)
+-    :param err: if set to true the file defaults to ``stderr`` instead of
+-                ``stdout``.  This is faster and easier than calling
+-                :func:`get_text_stderr` yourself.
+-    :param nl: if set to `True` (the default) a newline is printed afterwards.
+-    :param color: controls if the terminal supports ANSI colors or not.  The
+-                  default is autodetection.
+-    """
+-    if file is None:
+-        if err:
+-            file = _default_text_stderr()
+-        else:
+-            file = _default_text_stdout()
+-
+-    # Convert non bytes/text into the native string type.
+-    if message is not None and not isinstance(message, echo_native_types):
+-        message = str(message)
+-
+-    if nl:
+-        message = message or ""
+-        if isinstance(message, str):
+-            message += "\n"
+-        else:
+-            message += b"\n"
+-
+-    # If there is a message and the value looks like bytes, we manually
+-    # need to find the binary stream and write the message in there.
+-    # This is done separately so that most stream types will work as you
+-    # would expect. Eg: you can write to StringIO for other cases.
+-    if message and is_bytes(message):
+-        binary_file = _find_binary_writer(file)
+-        if binary_file is not None:
+-            file.flush()
+-            binary_file.write(message)
+-            binary_file.flush()
+-            return
+-
+-    # ANSI-style support.  If there is no message or we are dealing with
+-    # bytes nothing is happening.  If we are connected to a file we want
+-    # to strip colors.  If we are on windows we either wrap the stream
+-    # to strip the color or we use the colorama support to translate the
+-    # ansi codes to API calls.
+-    if message and not is_bytes(message):
+-        color = resolve_color_default(color)
+-        if should_strip_ansi(file, color):
+-            message = strip_ansi(message)
+-        elif WIN:
+-            if auto_wrap_for_ansi is not None:
+-                file = auto_wrap_for_ansi(file)
+-            elif not color:
+-                message = strip_ansi(message)
+-
+-    if message:
+-        file.write(message)
+-    file.flush()
+-
+-
+-def get_binary_stream(name):
+-    """Returns a system stream for byte processing.
+-
+-    :param name: the name of the stream to open.  Valid names are ``'stdin'``,
+-                 ``'stdout'`` and ``'stderr'``
+-    """
+-    opener = binary_streams.get(name)
+-    if opener is None:
+-        raise TypeError(f"Unknown standard stream '{name}'")
+-    return opener()
+-
+-
+-def get_text_stream(name, encoding=None, errors="strict"):
+-    """Returns a system stream for text processing.  This usually returns
+-    a wrapped stream around a binary stream returned from
+-    :func:`get_binary_stream` but it also can take shortcuts for already
+-    correctly configured streams.
+-
+-    :param name: the name of the stream to open.  Valid names are ``'stdin'``,
+-                 ``'stdout'`` and ``'stderr'``
+-    :param encoding: overrides the detected default encoding.
+-    :param errors: overrides the default error mode.
+-    """
+-    opener = text_streams.get(name)
+-    if opener is None:
+-        raise TypeError(f"Unknown standard stream '{name}'")
+-    return opener(encoding, errors)
+-
+-
+-def open_file(
+-    filename, mode="r", encoding=None, errors="strict", lazy=False, atomic=False
+-):
+-    """This is similar to how the :class:`File` works but for manual
+-    usage.  Files are opened non lazy by default.  This can open regular
+-    files as well as stdin/stdout if ``'-'`` is passed.
+-
+-    If stdin/stdout is returned the stream is wrapped so that the context
+-    manager will not close the stream accidentally.  This makes it possible
+-    to always use the function like this without having to worry to
+-    accidentally close a standard stream::
+-
+-        with open_file(filename) as f:
+-            ...
+-
+-    .. versionadded:: 3.0
+-
+-    :param filename: the name of the file to open (or ``'-'`` for stdin/stdout).
+-    :param mode: the mode in which to open the file.
+-    :param encoding: the encoding to use.
+-    :param errors: the error handling for this file.
+-    :param lazy: can be flipped to true to open the file lazily.
+-    :param atomic: in atomic mode writes go into a temporary file and it's
+-                   moved on close.
+-    """
+-    if lazy:
+-        return LazyFile(filename, mode, encoding, errors, atomic=atomic)
+-    f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic)
+-    if not should_close:
+-        f = KeepOpenFile(f)
+-    return f
+-
+-
+-def get_os_args():
+-    """Returns the argument part of ``sys.argv``, removing the first
+-    value which is the name of the script.
+-
+-    .. deprecated:: 8.0
+-        Will be removed in 8.1. Access ``sys.argv[1:]`` directly
+-        instead.
+-    """
+-    import warnings
+-
+-    warnings.warn(
+-        "'get_os_args' is deprecated and will be removed in 8.1. Access"
+-        " 'sys.argv[1:]' directly instead.",
+-        DeprecationWarning,
+-        stacklevel=2,
+-    )
+-    return sys.argv[1:]
+-
+-
+-def format_filename(filename, shorten=False):
+-    """Formats a filename for user display.  The main purpose of this
+-    function is to ensure that the filename can be displayed at all.  This
+-    will decode the filename to unicode if necessary in a way that it will
+-    not fail.  Optionally, it can shorten the filename to not include the
+-    full path to the filename.
+-
+-    :param filename: formats a filename for UI display.  This will also convert
+-                     the filename into unicode without failing.
+-    :param shorten: this optionally shortens the filename to strip of the
+-                    path that leads up to it.
+-    """
+-    if shorten:
+-        filename = os.path.basename(filename)
+-    return filename_to_ui(filename)
+-
+-
+-def get_app_dir(app_name, roaming=True, force_posix=False):
+-    r"""Returns the config folder for the application.  The default behavior
+-    is to return whatever is most appropriate for the operating system.
+-
+-    To give you an idea, for an app called ``"Foo Bar"``, something like
+-    the following folders could be returned:
+-
+-    Mac OS X:
+-      ``~/Library/Application Support/Foo Bar``
+-    Mac OS X (POSIX):
+-      ``~/.foo-bar``
+-    Unix:
+-      ``~/.config/foo-bar``
+-    Unix (POSIX):
+-      ``~/.foo-bar``
+-    Win XP (roaming):
+-      ``C:\Documents and Settings\<user>\Local Settings\Application Data\Foo Bar``
+-    Win XP (not roaming):
+-      ``C:\Documents and Settings\<user>\Application Data\Foo Bar``
+-    Win 7 (roaming):
+-      ``C:\Users\<user>\AppData\Roaming\Foo Bar``
+-    Win 7 (not roaming):
+-      ``C:\Users\<user>\AppData\Local\Foo Bar``
+-
+-    .. versionadded:: 2.0
+-
+-    :param app_name: the application name.  This should be properly capitalized
+-                     and can contain whitespace.
+-    :param roaming: controls if the folder should be roaming or not on Windows.
+-                    Has no affect otherwise.
+-    :param force_posix: if this is set to `True` then on any POSIX system the
+-                        folder will be stored in the home folder with a leading
+-                        dot instead of the XDG config home or darwin's
+-                        application support folder.
+-    """
+-    if WIN:
+-        key = "APPDATA" if roaming else "LOCALAPPDATA"
+-        folder = os.environ.get(key)
+-        if folder is None:
+-            folder = os.path.expanduser("~")
+-        return os.path.join(folder, app_name)
+-    if force_posix:
+-        return os.path.join(os.path.expanduser(f"~/.{_posixify(app_name)}"))
+-    if sys.platform == "darwin":
+-        return os.path.join(
+-            os.path.expanduser("~/Library/Application Support"), app_name
+-        )
+-    return os.path.join(
+-        os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")),
+-        _posixify(app_name),
+-    )
+-
+-
+-class PacifyFlushWrapper:
+-    """This wrapper is used to catch and suppress BrokenPipeErrors resulting
+-    from ``.flush()`` being called on broken pipe during the shutdown/final-GC
+-    of the Python interpreter. Notably ``.flush()`` is always called on
+-    ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any
+-    other cleanup code, and the case where the underlying file is not a broken
+-    pipe, all calls and attributes are proxied.
+-    """
+-
+-    def __init__(self, wrapped):
+-        self.wrapped = wrapped
+-
+-    def flush(self):
+-        try:
+-            self.wrapped.flush()
+-        except OSError as e:
+-            import errno
+-
+-            if e.errno != errno.EPIPE:
+-                raise
+-
+-    def __getattr__(self, attr):
+-        return getattr(self.wrapped, attr)
+diff --git a/dynaconf/vendor_src/dotenv/README.md b/dynaconf/vendor_src/dotenv/README.md
+deleted file mode 100644
+index 94a816f..0000000
+--- a/dynaconf/vendor_src/dotenv/README.md
++++ /dev/null
+@@ -1,6 +0,0 @@
+-## python-bodotenv
+-
+-Vendored dep taken from: https://github.com/theskumar/python-dotenv
+-Licensed under BSD: https://github.com/theskumar/python-dotenv/blob/master/LICENSE
+-
+-Current version: 0.13.0
+diff --git a/dynaconf/vendor_src/dotenv/__init__.py b/dynaconf/vendor_src/dotenv/__init__.py
+deleted file mode 100644
+index b88d9bc..0000000
+--- a/dynaconf/vendor_src/dotenv/__init__.py
++++ /dev/null
+@@ -1,46 +0,0 @@
+-from .compat import IS_TYPE_CHECKING
+-from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv, dotenv_values
+-
+-if IS_TYPE_CHECKING:
+-    from typing import Any, Optional
+-
+-
+-def load_ipython_extension(ipython):
+-    # type: (Any) -> None
+-    from .ipython import load_ipython_extension
+-    load_ipython_extension(ipython)
+-
+-
+-def get_cli_string(path=None, action=None, key=None, value=None, quote=None):
+-    # type: (Optional[str], Optional[str], Optional[str], Optional[str], Optional[str]) -> str
+-    """Returns a string suitable for running as a shell script.
+-
+-    Useful for converting a arguments passed to a fabric task
+-    to be passed to a `local` or `run` command.
+-    """
+-    command = ['dotenv']
+-    if quote:
+-        command.append('-q %s' % quote)
+-    if path:
+-        command.append('-f %s' % path)
+-    if action:
+-        command.append(action)
+-        if key:
+-            command.append(key)
+-            if value:
+-                if ' ' in value:
+-                    command.append('"%s"' % value)
+-                else:
+-                    command.append(value)
+-
+-    return ' '.join(command).strip()
+-
+-
+-__all__ = ['get_cli_string',
+-           'load_dotenv',
+-           'dotenv_values',
+-           'get_key',
+-           'set_key',
+-           'unset_key',
+-           'find_dotenv',
+-           'load_ipython_extension']
+diff --git a/dynaconf/vendor_src/dotenv/cli.py b/dynaconf/vendor_src/dotenv/cli.py
+deleted file mode 100644
+index 269b093..0000000
+--- a/dynaconf/vendor_src/dotenv/cli.py
++++ /dev/null
+@@ -1,145 +0,0 @@
+-import os
+-import sys
+-from subprocess import Popen
+-
+-try:
+-    from dynaconf.vendor import click
+-except ImportError:
+-    sys.stderr.write('It seems python-dotenv is not installed with cli option. \n'
+-                     'Run pip install "python-dotenv[cli]" to fix this.')
+-    sys.exit(1)
+-
+-from .compat import IS_TYPE_CHECKING, to_env
+-from .main import dotenv_values, get_key, set_key, unset_key
+-from .version import __version__
+-
+-if IS_TYPE_CHECKING:
+-    from typing import Any, List, Dict
+-
+-
+-@click.group()
+-@click.option('-f', '--file', default=os.path.join(os.getcwd(), '.env'),
+-              type=click.Path(exists=True),
+-              help="Location of the .env file, defaults to .env file in current working directory.")
+-@click.option('-q', '--quote', default='always',
+-              type=click.Choice(['always', 'never', 'auto']),
+-              help="Whether to quote or not the variable values. Default mode is always. This does not affect parsing.")
+-@click.version_option(version=__version__)
+-@click.pass_context
+-def cli(ctx, file, quote):
+-    # type: (click.Context, Any, Any) -> None
+-    '''This script is used to set, get or unset values from a .env file.'''
+-    ctx.obj = {}
+-    ctx.obj['FILE'] = file
+-    ctx.obj['QUOTE'] = quote
+-
+-
+-@cli.command()
+-@click.pass_context
+-def list(ctx):
+-    # type: (click.Context) -> None
+-    '''Display all the stored key/value.'''
+-    file = ctx.obj['FILE']
+-    dotenv_as_dict = dotenv_values(file)
+-    for k, v in dotenv_as_dict.items():
+-        click.echo('%s=%s' % (k, v))
+-
+-
+-@cli.command()
+-@click.pass_context
+-@click.argument('key', required=True)
+-@click.argument('value', required=True)
+-def set(ctx, key, value):
+-    # type: (click.Context, Any, Any) -> None
+-    '''Store the given key/value.'''
+-    file = ctx.obj['FILE']
+-    quote = ctx.obj['QUOTE']
+-    success, key, value = set_key(file, key, value, quote)
+-    if success:
+-        click.echo('%s=%s' % (key, value))
+-    else:
+-        exit(1)
+-
+-
+-@cli.command()
+-@click.pass_context
+-@click.argument('key', required=True)
+-def get(ctx, key):
+-    # type: (click.Context, Any) -> None
+-    '''Retrieve the value for the given key.'''
+-    file = ctx.obj['FILE']
+-    stored_value = get_key(file, key)
+-    if stored_value:
+-        click.echo('%s=%s' % (key, stored_value))
+-    else:
+-        exit(1)
+-
+-
+-@cli.command()
+-@click.pass_context
+-@click.argument('key', required=True)
+-def unset(ctx, key):
+-    # type: (click.Context, Any) -> None
+-    '''Removes the given key.'''
+-    file = ctx.obj['FILE']
+-    quote = ctx.obj['QUOTE']
+-    success, key = unset_key(file, key, quote)
+-    if success:
+-        click.echo("Successfully removed %s" % key)
+-    else:
+-        exit(1)
+-
+-
+-@cli.command(context_settings={'ignore_unknown_options': True})
+-@click.pass_context
+-@click.argument('commandline', nargs=-1, type=click.UNPROCESSED)
+-def run(ctx, commandline):
+-    # type: (click.Context, List[str]) -> None
+-    """Run command with environment variables present."""
+-    file = ctx.obj['FILE']
+-    dotenv_as_dict = {to_env(k): to_env(v) for (k, v) in dotenv_values(file).items() if v is not None}
+-
+-    if not commandline:
+-        click.echo('No command given.')
+-        exit(1)
+-    ret = run_command(commandline, dotenv_as_dict)
+-    exit(ret)
+-
+-
+-def run_command(command, env):
+-    # type: (List[str], Dict[str, str]) -> int
+-    """Run command in sub process.
+-
+-    Runs the command in a sub process with the variables from `env`
+-    added in the current environment variables.
+-
+-    Parameters
+-    ----------
+-    command: List[str]
+-        The command and it's parameters
+-    env: Dict
+-        The additional environment variables
+-
+-    Returns
+-    -------
+-    int
+-        The return code of the command
+-
+-    """
+-    # copy the current environment variables and add the vales from
+-    # `env`
+-    cmd_env = os.environ.copy()
+-    cmd_env.update(env)
+-
+-    p = Popen(command,
+-              universal_newlines=True,
+-              bufsize=0,
+-              shell=False,
+-              env=cmd_env)
+-    _, _ = p.communicate()
+-
+-    return p.returncode
+-
+-
+-if __name__ == "__main__":
+-    cli()
+diff --git a/dynaconf/vendor_src/dotenv/compat.py b/dynaconf/vendor_src/dotenv/compat.py
+deleted file mode 100644
+index f8089bf..0000000
+--- a/dynaconf/vendor_src/dotenv/compat.py
++++ /dev/null
+@@ -1,49 +0,0 @@
+-import sys
+-
+-PY2 = sys.version_info[0] == 2  # type: bool
+-
+-if PY2:
+-    from StringIO import StringIO  # noqa
+-else:
+-    from io import StringIO  # noqa
+-
+-
+-def is_type_checking():
+-    # type: () -> bool
+-    try:
+-        from typing import TYPE_CHECKING
+-    except ImportError:
+-        return False
+-    return TYPE_CHECKING
+-
+-
+-IS_TYPE_CHECKING = is_type_checking()
+-
+-
+-if IS_TYPE_CHECKING:
+-    from typing import Text
+-
+-
+-def to_env(text):
+-    # type: (Text) -> str
+-    """
+-    Encode a string the same way whether it comes from the environment or a `.env` file.
+-    """
+-    if PY2:
+-        return text.encode(sys.getfilesystemencoding() or "utf-8")
+-    else:
+-        return text
+-
+-
+-def to_text(string):
+-    # type: (str) -> Text
+-    """
+-    Make a string Unicode if it isn't already.
+-
+-    This is useful for defining raw unicode strings because `ur"foo"` isn't valid in
+-    Python 3.
+-    """
+-    if PY2:
+-        return string.decode("utf-8")
+-    else:
+-        return string
+diff --git a/dynaconf/vendor_src/dotenv/ipython.py b/dynaconf/vendor_src/dotenv/ipython.py
+deleted file mode 100644
+index 7f1b13d..0000000
+--- a/dynaconf/vendor_src/dotenv/ipython.py
++++ /dev/null
+@@ -1,41 +0,0 @@
+-from __future__ import print_function
+-
+-from IPython.core.magic import Magics, line_magic, magics_class  # type: ignore
+-from IPython.core.magic_arguments import (argument, magic_arguments,  # type: ignore
+-                                          parse_argstring)  # type: ignore
+-
+-from .main import find_dotenv, load_dotenv
+-
+-
+-@magics_class
+-class IPythonDotEnv(Magics):
+-
+-    @magic_arguments()
+-    @argument(
+-        '-o', '--override', action='store_true',
+-        help="Indicate to override existing variables"
+-    )
+-    @argument(
+-        '-v', '--verbose', action='store_true',
+-        help="Indicate function calls to be verbose"
+-    )
+-    @argument('dotenv_path', nargs='?', type=str, default='.env',
+-              help='Search in increasingly higher folders for the `dotenv_path`')
+-    @line_magic
+-    def dotenv(self, line):
+-        args = parse_argstring(self.dotenv, line)
+-        # Locate the .env file
+-        dotenv_path = args.dotenv_path
+-        try:
+-            dotenv_path = find_dotenv(dotenv_path, True, True)
+-        except IOError:
+-            print("cannot find .env file")
+-            return
+-
+-        # Load the .env file
+-        load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)
+-
+-
+-def load_ipython_extension(ipython):
+-    """Register the %dotenv magic."""
+-    ipython.register_magics(IPythonDotEnv)
+diff --git a/dynaconf/vendor_src/dotenv/main.py b/dynaconf/vendor_src/dotenv/main.py
+deleted file mode 100644
+index c821ef7..0000000
+--- a/dynaconf/vendor_src/dotenv/main.py
++++ /dev/null
+@@ -1,323 +0,0 @@
+-# -*- coding: utf-8 -*-
+-from __future__ import absolute_import, print_function, unicode_literals
+-
+-import io
+-import logging
+-import os
+-import re
+-import shutil
+-import sys
+-import tempfile
+-from collections import OrderedDict
+-from contextlib import contextmanager
+-
+-from .compat import IS_TYPE_CHECKING, PY2, StringIO, to_env
+-from .parser import Binding, parse_stream
+-
+-logger = logging.getLogger(__name__)
+-
+-if IS_TYPE_CHECKING:
+-    from typing import (
+-        Dict, Iterator, Match, Optional, Pattern, Union, Text, IO, Tuple
+-    )
+-    if sys.version_info >= (3, 6):
+-        _PathLike = os.PathLike
+-    else:
+-        _PathLike = Text
+-
+-    if sys.version_info >= (3, 0):
+-        _StringIO = StringIO
+-    else:
+-        _StringIO = StringIO[Text]
+-
+-__posix_variable = re.compile(
+-    r"""
+-    \$\{
+-        (?P<name>[^\}:]*)
+-        (?::-
+-            (?P<default>[^\}]*)
+-        )?
+-    \}
+-    """,
+-    re.VERBOSE,
+-)  # type: Pattern[Text]
+-
+-
+-def with_warn_for_invalid_lines(mappings):
+-    # type: (Iterator[Binding]) -> Iterator[Binding]
+-    for mapping in mappings:
+-        if mapping.error:
+-            logger.warning(
+-                "Python-dotenv could not parse statement starting at line %s",
+-                mapping.original.line,
+-            )
+-        yield mapping
+-
+-
+-class DotEnv():
+-
+-    def __init__(self, dotenv_path, verbose=False, encoding=None, interpolate=True):
+-        # type: (Union[Text, _PathLike, _StringIO], bool, Union[None, Text], bool) -> None
+-        self.dotenv_path = dotenv_path  # type: Union[Text,_PathLike, _StringIO]
+-        self._dict = None  # type: Optional[Dict[Text, Optional[Text]]]
+-        self.verbose = verbose  # type: bool
+-        self.encoding = encoding  # type: Union[None, Text]
+-        self.interpolate = interpolate  # type: bool
+-
+-    @contextmanager
+-    def _get_stream(self):
+-        # type: () -> Iterator[IO[Text]]
+-        if isinstance(self.dotenv_path, StringIO):
+-            yield self.dotenv_path
+-        elif os.path.isfile(self.dotenv_path):
+-            with io.open(self.dotenv_path, encoding=self.encoding) as stream:
+-                yield stream
+-        else:
+-            if self.verbose:
+-                logger.info("Python-dotenv could not find configuration file %s.", self.dotenv_path or '.env')
+-            yield StringIO('')
+-
+-    def dict(self):
+-        # type: () -> Dict[Text, Optional[Text]]
+-        """Return dotenv as dict"""
+-        if self._dict:
+-            return self._dict
+-
+-        values = OrderedDict(self.parse())
+-        self._dict = resolve_nested_variables(values) if self.interpolate else values
+-        return self._dict
+-
+-    def parse(self):
+-        # type: () -> Iterator[Tuple[Text, Optional[Text]]]
+-        with self._get_stream() as stream:
+-            for mapping in with_warn_for_invalid_lines(parse_stream(stream)):
+-                if mapping.key is not None:
+-                    yield mapping.key, mapping.value
+-
+-    def set_as_environment_variables(self, override=False):
+-        # type: (bool) -> bool
+-        """
+-        Load the current dotenv as system environemt variable.
+-        """
+-        for k, v in self.dict().items():
+-            if k in os.environ and not override:
+-                continue
+-            if v is not None:
+-                os.environ[to_env(k)] = to_env(v)
+-
+-        return True
+-
+-    def get(self, key):
+-        # type: (Text) -> Optional[Text]
+-        """
+-        """
+-        data = self.dict()
+-
+-        if key in data:
+-            return data[key]
+-
+-        if self.verbose:
+-            logger.warning("Key %s not found in %s.", key, self.dotenv_path)
+-
+-        return None
+-
+-
+-def get_key(dotenv_path, key_to_get):
+-    # type: (Union[Text, _PathLike], Text) -> Optional[Text]
+-    """
+-    Gets the value of a given key from the given .env
+-
+-    If the .env path given doesn't exist, fails
+-    """
+-    return DotEnv(dotenv_path, verbose=True).get(key_to_get)
+-
+-
+-@contextmanager
+-def rewrite(path):
+-    # type: (_PathLike) -> Iterator[Tuple[IO[Text], IO[Text]]]
+-    try:
+-        with tempfile.NamedTemporaryFile(mode="w+", delete=False) as dest:
+-            with io.open(path) as source:
+-                yield (source, dest)  # type: ignore
+-    except BaseException:
+-        if os.path.isfile(dest.name):
+-            os.unlink(dest.name)
+-        raise
+-    else:
+-        shutil.move(dest.name, path)
+-
+-
+-def set_key(dotenv_path, key_to_set, value_to_set, quote_mode="always"):
+-    # type: (_PathLike, Text, Text, Text) -> Tuple[Optional[bool], Text, Text]
+-    """
+-    Adds or Updates a key/value to the given .env
+-
+-    If the .env path given doesn't exist, fails instead of risking creating
+-    an orphan .env somewhere in the filesystem
+-    """
+-    value_to_set = value_to_set.strip("'").strip('"')
+-    if not os.path.exists(dotenv_path):
+-        logger.warning("Can't write to %s - it doesn't exist.", dotenv_path)
+-        return None, key_to_set, value_to_set
+-
+-    if " " in value_to_set:
+-        quote_mode = "always"
+-
+-    if quote_mode == "always":
+-        value_out = '"{}"'.format(value_to_set.replace('"', '\\"'))
+-    else:
+-        value_out = value_to_set
+-    line_out = "{}={}\n".format(key_to_set, value_out)
+-
+-    with rewrite(dotenv_path) as (source, dest):
+-        replaced = False
+-        for mapping in with_warn_for_invalid_lines(parse_stream(source)):
+-            if mapping.key == key_to_set:
+-                dest.write(line_out)
+-                replaced = True
+-            else:
+-                dest.write(mapping.original.string)
+-        if not replaced:
+-            dest.write(line_out)
+-
+-    return True, key_to_set, value_to_set
+-
+-
+-def unset_key(dotenv_path, key_to_unset, quote_mode="always"):
+-    # type: (_PathLike, Text, Text) -> Tuple[Optional[bool], Text]
+-    """
+-    Removes a given key from the given .env
+-
+-    If the .env path given doesn't exist, fails
+-    If the given key doesn't exist in the .env, fails
+-    """
+-    if not os.path.exists(dotenv_path):
+-        logger.warning("Can't delete from %s - it doesn't exist.", dotenv_path)
+-        return None, key_to_unset
+-
+-    removed = False
+-    with rewrite(dotenv_path) as (source, dest):
+-        for mapping in with_warn_for_invalid_lines(parse_stream(source)):
+-            if mapping.key == key_to_unset:
+-                removed = True
+-            else:
+-                dest.write(mapping.original.string)
+-
+-    if not removed:
+-        logger.warning("Key %s not removed from %s - key doesn't exist.", key_to_unset, dotenv_path)
+-        return None, key_to_unset
+-
+-    return removed, key_to_unset
+-
+-
+-def resolve_nested_variables(values):
+-    # type: (Dict[Text, Optional[Text]]) -> Dict[Text, Optional[Text]]
+-    def _replacement(name, default):
+-        # type: (Text, Optional[Text]) -> Text
+-        """
+-        get appropriate value for a variable name.
+-        first search in environ, if not found,
+-        then look into the dotenv variables
+-        """
+-        default = default if default is not None else ""
+-        ret = os.getenv(name, new_values.get(name, default))
+-        return ret  # type: ignore
+-
+-    def _re_sub_callback(match):
+-        # type: (Match[Text]) -> Text
+-        """
+-        From a match object gets the variable name and returns
+-        the correct replacement
+-        """
+-        matches = match.groupdict()
+-        return _replacement(name=matches["name"], default=matches["default"])  # type: ignore
+-
+-    new_values = {}
+-
+-    for k, v in values.items():
+-        new_values[k] = __posix_variable.sub(_re_sub_callback, v) if v is not None else None
+-
+-    return new_values
+-
+-
+-def _walk_to_root(path):
+-    # type: (Text) -> Iterator[Text]
+-    """
+-    Yield directories starting from the given directory up to the root
+-    """
+-    if not os.path.exists(path):
+-        raise IOError('Starting path not found')
+-
+-    if os.path.isfile(path):
+-        path = os.path.dirname(path)
+-
+-    last_dir = None
+-    current_dir = os.path.abspath(path)
+-    while last_dir != current_dir:
+-        yield current_dir
+-        parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))
+-        last_dir, current_dir = current_dir, parent_dir
+-
+-
+-def find_dotenv(filename='.env', raise_error_if_not_found=False, usecwd=False):
+-    # type: (Text, bool, bool) -> Text
+-    """
+-    Search in increasingly higher folders for the given file
+-
+-    Returns path to the file if found, or an empty string otherwise
+-    """
+-
+-    def _is_interactive():
+-        """ Decide whether this is running in a REPL or IPython notebook """
+-        main = __import__('__main__', None, None, fromlist=['__file__'])
+-        return not hasattr(main, '__file__')
+-
+-    if usecwd or _is_interactive() or getattr(sys, 'frozen', False):
+-        # Should work without __file__, e.g. in REPL or IPython notebook.
+-        path = os.getcwd()
+-    else:
+-        # will work for .py files
+-        frame = sys._getframe()
+-        # find first frame that is outside of this file
+-        if PY2 and not __file__.endswith('.py'):
+-            # in Python2 __file__ extension could be .pyc or .pyo (this doesn't account
+-            # for edge case of Python compiled for non-standard extension)
+-            current_file = __file__.rsplit('.', 1)[0] + '.py'
+-        else:
+-            current_file = __file__
+-
+-        while frame.f_code.co_filename == current_file:
+-            assert frame.f_back is not None
+-            frame = frame.f_back
+-        frame_filename = frame.f_code.co_filename
+-        path = os.path.dirname(os.path.abspath(frame_filename))
+-
+-    for dirname in _walk_to_root(path):
+-        check_path = os.path.join(dirname, filename)
+-        if os.path.isfile(check_path):
+-            return check_path
+-
+-    if raise_error_if_not_found:
+-        raise IOError('File not found')
+-
+-    return ''
+-
+-
+-def load_dotenv(dotenv_path=None, stream=None, verbose=False, override=False, interpolate=True, **kwargs):
+-    # type: (Union[Text, _PathLike, None], Optional[_StringIO], bool, bool, bool, Union[None, Text]) -> bool
+-    """Parse a .env file and then load all the variables found as environment variables.
+-
+-    - *dotenv_path*: absolute or relative path to .env file.
+-    - *stream*: `StringIO` object with .env content.
+-    - *verbose*: whether to output the warnings related to missing .env file etc. Defaults to `False`.
+-    - *override*: where to override the system environment variables with the variables in `.env` file.
+-                  Defaults to `False`.
+-    """
+-    f = dotenv_path or stream or find_dotenv()
+-    return DotEnv(f, verbose=verbose, interpolate=interpolate, **kwargs).set_as_environment_variables(override=override)
+-
+-
+-def dotenv_values(dotenv_path=None, stream=None, verbose=False, interpolate=True, **kwargs):
+-    # type: (Union[Text, _PathLike, None], Optional[_StringIO], bool, bool, Union[None, Text]) -> Dict[Text, Optional[Text]]  # noqa: E501
+-    f = dotenv_path or stream or find_dotenv()
+-    return DotEnv(f, verbose=verbose, interpolate=interpolate, **kwargs).dict()
+diff --git a/dynaconf/vendor_src/dotenv/parser.py b/dynaconf/vendor_src/dotenv/parser.py
+deleted file mode 100644
+index 2c93cbd..0000000
+--- a/dynaconf/vendor_src/dotenv/parser.py
++++ /dev/null
+@@ -1,237 +0,0 @@
+-import codecs
+-import re
+-
+-from .compat import IS_TYPE_CHECKING, to_text
+-
+-if IS_TYPE_CHECKING:
+-    from typing import (  # noqa:F401
+-        IO, Iterator, Match, NamedTuple, Optional, Pattern, Sequence, Text,
+-        Tuple
+-    )
+-
+-
+-def make_regex(string, extra_flags=0):
+-    # type: (str, int) -> Pattern[Text]
+-    return re.compile(to_text(string), re.UNICODE | extra_flags)
+-
+-
+-_newline = make_regex(r"(\r\n|\n|\r)")
+-_multiline_whitespace = make_regex(r"\s*", extra_flags=re.MULTILINE)
+-_whitespace = make_regex(r"[^\S\r\n]*")
+-_export = make_regex(r"(?:export[^\S\r\n]+)?")
+-_single_quoted_key = make_regex(r"'([^']+)'")
+-_unquoted_key = make_regex(r"([^=\#\s]+)")
+-_equal_sign = make_regex(r"(=[^\S\r\n]*)")
+-_single_quoted_value = make_regex(r"'((?:\\'|[^'])*)'")
+-_double_quoted_value = make_regex(r'"((?:\\"|[^"])*)"')
+-_unquoted_value_part = make_regex(r"([^ \r\n]*)")
+-_comment = make_regex(r"(?:[^\S\r\n]*#[^\r\n]*)?")
+-_end_of_line = make_regex(r"[^\S\r\n]*(?:\r\n|\n|\r|$)")
+-_rest_of_line = make_regex(r"[^\r\n]*(?:\r|\n|\r\n)?")
+-_double_quote_escapes = make_regex(r"\\[\\'\"abfnrtv]")
+-_single_quote_escapes = make_regex(r"\\[\\']")
+-
+-
+-try:
+-    # this is necessary because we only import these from typing
+-    # when we are type checking, and the linter is upset if we
+-    # re-import
+-    import typing
+-
+-    Original = typing.NamedTuple(
+-        "Original",
+-        [
+-            ("string", typing.Text),
+-            ("line", int),
+-        ],
+-    )
+-
+-    Binding = typing.NamedTuple(
+-        "Binding",
+-        [
+-            ("key", typing.Optional[typing.Text]),
+-            ("value", typing.Optional[typing.Text]),
+-            ("original", Original),
+-            ("error", bool),
+-        ],
+-    )
+-except ImportError:
+-    from collections import namedtuple
+-    Original = namedtuple(  # type: ignore
+-        "Original",
+-        [
+-            "string",
+-            "line",
+-        ],
+-    )
+-    Binding = namedtuple(  # type: ignore
+-        "Binding",
+-        [
+-            "key",
+-            "value",
+-            "original",
+-            "error",
+-        ],
+-    )
+-
+-
+-class Position:
+-    def __init__(self, chars, line):
+-        # type: (int, int) -> None
+-        self.chars = chars
+-        self.line = line
+-
+-    @classmethod
+-    def start(cls):
+-        # type: () -> Position
+-        return cls(chars=0, line=1)
+-
+-    def set(self, other):
+-        # type: (Position) -> None
+-        self.chars = other.chars
+-        self.line = other.line
+-
+-    def advance(self, string):
+-        # type: (Text) -> None
+-        self.chars += len(string)
+-        self.line += len(re.findall(_newline, string))
+-
+-
+-class Error(Exception):
+-    pass
+-
+-
+-class Reader:
+-    def __init__(self, stream):
+-        # type: (IO[Text]) -> None
+-        self.string = stream.read()
+-        self.position = Position.start()
+-        self.mark = Position.start()
+-
+-    def has_next(self):
+-        # type: () -> bool
+-        return self.position.chars < len(self.string)
+-
+-    def set_mark(self):
+-        # type: () -> None
+-        self.mark.set(self.position)
+-
+-    def get_marked(self):
+-        # type: () -> Original
+-        return Original(
+-            string=self.string[self.mark.chars:self.position.chars],
+-            line=self.mark.line,
+-        )
+-
+-    def peek(self, count):
+-        # type: (int) -> Text
+-        return self.string[self.position.chars:self.position.chars + count]
+-
+-    def read(self, count):
+-        # type: (int) -> Text
+-        result = self.string[self.position.chars:self.position.chars + count]
+-        if len(result) < count:
+-            raise Error("read: End of string")
+-        self.position.advance(result)
+-        return result
+-
+-    def read_regex(self, regex):
+-        # type: (Pattern[Text]) -> Sequence[Text]
+-        match = regex.match(self.string, self.position.chars)
+-        if match is None:
+-            raise Error("read_regex: Pattern not found")
+-        self.position.advance(self.string[match.start():match.end()])
+-        return match.groups()
+-
+-
+-def decode_escapes(regex, string):
+-    # type: (Pattern[Text], Text) -> Text
+-    def decode_match(match):
+-        # type: (Match[Text]) -> Text
+-        return codecs.decode(match.group(0), 'unicode-escape')  # type: ignore
+-
+-    return regex.sub(decode_match, string)
+-
+-
+-def parse_key(reader):
+-    # type: (Reader) -> Optional[Text]
+-    char = reader.peek(1)
+-    if char == "#":
+-        return None
+-    elif char == "'":
+-        (key,) = reader.read_regex(_single_quoted_key)
+-    else:
+-        (key,) = reader.read_regex(_unquoted_key)
+-    return key
+-
+-
+-def parse_unquoted_value(reader):
+-    # type: (Reader) -> Text
+-    value = u""
+-    while True:
+-        (part,) = reader.read_regex(_unquoted_value_part)
+-        value += part
+-        after = reader.peek(2)
+-        if len(after) < 2 or after[0] in u"\r\n" or after[1] in u" #\r\n":
+-            return value
+-        value += reader.read(2)
+-
+-
+-def parse_value(reader):
+-    # type: (Reader) -> Text
+-    char = reader.peek(1)
+-    if char == u"'":
+-        (value,) = reader.read_regex(_single_quoted_value)
+-        return decode_escapes(_single_quote_escapes, value)
+-    elif char == u'"':
+-        (value,) = reader.read_regex(_double_quoted_value)
+-        return decode_escapes(_double_quote_escapes, value)
+-    elif char in (u"", u"\n", u"\r"):
+-        return u""
+-    else:
+-        return parse_unquoted_value(reader)
+-
+-
+-def parse_binding(reader):
+-    # type: (Reader) -> Binding
+-    reader.set_mark()
+-    try:
+-        reader.read_regex(_multiline_whitespace)
+-        if not reader.has_next():
+-            return Binding(
+-                key=None,
+-                value=None,
+-                original=reader.get_marked(),
+-                error=False,
+-            )
+-        reader.read_regex(_export)
+-        key = parse_key(reader)
+-        reader.read_regex(_whitespace)
+-        if reader.peek(1) == "=":
+-            reader.read_regex(_equal_sign)
+-            value = parse_value(reader)  # type: Optional[Text]
+-        else:
+-            value = None
+-        reader.read_regex(_comment)
+-        reader.read_regex(_end_of_line)
+-        return Binding(
+-            key=key,
+-            value=value,
+-            original=reader.get_marked(),
+-            error=False,
+-        )
+-    except Error:
+-        reader.read_regex(_rest_of_line)
+-        return Binding(
+-            key=None,
+-            value=None,
+-            original=reader.get_marked(),
+-            error=True,
+-        )
+-
+-
+-def parse_stream(stream):
+-    # type: (IO[Text]) -> Iterator[Binding]
+-    reader = Reader(stream)
+-    while reader.has_next():
+-        yield parse_binding(reader)
+diff --git a/dynaconf/vendor_src/dotenv/py.typed b/dynaconf/vendor_src/dotenv/py.typed
+deleted file mode 100644
+index 7632ecf..0000000
+--- a/dynaconf/vendor_src/dotenv/py.typed
++++ /dev/null
+@@ -1 +0,0 @@
+-# Marker file for PEP 561
+diff --git a/dynaconf/vendor_src/dotenv/version.py b/dynaconf/vendor_src/dotenv/version.py
+deleted file mode 100644
+index f23a6b3..0000000
+--- a/dynaconf/vendor_src/dotenv/version.py
++++ /dev/null
+@@ -1 +0,0 @@
+-__version__ = "0.13.0"
+diff --git a/dynaconf/vendor_src/ruamel/__init__.py b/dynaconf/vendor_src/ruamel/__init__.py
+deleted file mode 100644
+index e69de29..0000000
+diff --git a/dynaconf/vendor_src/ruamel/yaml/CHANGES b/dynaconf/vendor_src/ruamel/yaml/CHANGES
+deleted file mode 100644
+index a70a8ef..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/CHANGES
++++ /dev/null
+@@ -1,957 +0,0 @@
+-[0, 16, 10]: 2020-02-12
+-  - (auto) updated image references in README to sourceforge
+-
+-[0, 16, 9]: 2020-02-11
+-  - update CHANGES
+-
+-[0, 16, 8]: 2020-02-11
+-  - update requirements so that ruamel.yaml.clib is installed for 3.8,
+-    as it has become available (via manylinux builds)
+-
+-[0, 16, 7]: 2020-01-30
+-  - fix typchecking issue on TaggedScalar (reported by Jens Nielsen)
+-  - fix error in dumping literal scalar in sequence with comments before element
+-    (reported by `EJ Etherington <https://sourceforge.net/u/ejether/>`__)
+-
+-[0, 16, 6]: 2020-01-20
+-  - fix empty string mapping key roundtripping with preservation of quotes as `? ''`
+-    (reported via email by Tomer Aharoni).
+-  - fix incorrect state setting in class constructor (reported by `Douglas Raillard
+-    <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+-  - adjust deprecation warning test for Hashable, as that no longer warns (reported
+-    by `Jason Montleon <https://bitbucket.org/%7B8f377d12-8d5b-4069-a662-00a2674fee4e%7D/>`__)
+-
+-[0, 16, 5]: 2019-08-18
+-  - allow for ``YAML(typ=['unsafe', 'pytypes'])``
+-
+-[0, 16, 4]: 2019-08-16
+-  - fix output of TAG directives with # (reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-
+-[0, 16, 3]: 2019-08-15
+-  - move setting of version based on YAML directive to scanner, allowing to
+-    check for file version during TAG directive scanning
+-
+-[0, 16, 2]: 2019-08-15
+-  - preserve YAML and TAG directives on roundtrip, correctly output #
+-    in URL for YAML 1.2 (both reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-[0, 16, 1]: 2019-08-08
+-  - Force the use of new version of ruamel.yaml.clib (reported by `Alex Joz
+-    <https://bitbucket.org/%7B9af55900-2534-4212-976c-61339b6ffe14%7D/>`__)
+-  - Allow '#' in tag URI as these are allowed in YAML 1.2 (reported by
+-    `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-[0, 16, 0]: 2019-07-25
+-  - split of C source that generates .so file to ruamel.yaml.clib
+-  - duplicate keys are now an error when working with the old API as well
+-
+-[0, 15, 100]: 2019-07-17
+-  - fixing issue with dumping deep-copied data from commented YAML, by
+-    providing both the memo parameter to __deepcopy__, and by allowing
+-    startmarks to be compared on their content (reported by `Theofilos
+-    Petsios
+-    <https://bitbucket.org/%7Be550bc5d-403d-4fda-820b-bebbe71796d3%7D/>`__)
+-
+-[0, 15, 99]: 2019-07-12
+-  - add `py.typed` to distribution, based on a PR submitted by
+-    `Michael Crusoe
+-    <https://bitbucket.org/%7Bc9fbde69-e746-48f5-900d-34992b7860c8%7D/>`__
+-  - merge PR 40 (also by Michael Crusoe) to more accurately specify
+-    repository in the README (also reported in a misunderstood issue
+-    some time ago)
+-
+-[0, 15, 98]: 2019-07-09
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.12, needed
+-    for Python 3.8.0b2 (reported by `John Vandenberg
+-    <https://bitbucket.org/%7B6d4e8487-3c97-4dab-a060-088ec50c682c%7D/>`__)
+-
+-[0, 15, 97]: 2019-06-06
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.10, needed for
+-    Python 3.8.0b1
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.9, needed for
+-    Python 3.8.0a4 (reported by `Anthony Sottile
+-    <https://bitbucket.org/%7B569cc8ea-0d9e-41cb-94a4-19ea517324df%7D/>`__)
+-
+-[0, 15, 96]: 2019-05-16
+-  - fix failure to indent comments on round-trip anchored block style
+-    scalars in block sequence (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-
+-[0, 15, 95]: 2019-05-16
+-  - fix failure to round-trip anchored scalars in block sequence
+-    (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-  - wheel files for Python 3.4 no longer provided (`Python 3.4 EOL 2019-03-18
+-    <https://www.python.org/dev/peps/pep-0429/>`__)
+-
+-[0, 15, 94]: 2019-04-23
+-  - fix missing line-break after end-of-file comments not ending in
+-    line-break (reported by `Philip Thompson
+-    <https://bitbucket.org/%7Be42ba205-0876-4151-bcbe-ccaea5bd13ce%7D/>`__)
+-
+-[0, 15, 93]: 2019-04-21
+-  - fix failure to parse empty implicit flow mapping key
+-  - in YAML 1.1 plains scalars `y`, 'n', `Y`, and 'N' are now
+-    correctly recognised as booleans and such strings dumped quoted
+-    (reported by `Marcel Bollmann
+-    <https://bitbucket.org/%7Bd8850921-9145-4ad0-ac30-64c3bd9b036d%7D/>`__)
+-
+-[0, 15, 92]: 2019-04-16
+-  - fix failure to parse empty implicit block mapping key (reported by 
+-    `Nolan W <https://bitbucket.org/i2labs/>`__)
+-
+-[0, 15, 91]: 2019-04-05
+-  - allowing duplicate keys would not work for merge keys (reported by mamacdon on
+-    `StackOverflow <https://stackoverflow.com/questions/55540686/>`__ 
+-
+-[0, 15, 90]: 2019-04-04
+-  - fix issue with updating `CommentedMap` from list of tuples (reported by 
+-    `Peter Henry <https://bitbucket.org/mosbasik/>`__)
+-
+-[0, 15, 89]: 2019-02-27
+-  - fix for items with flow-mapping in block sequence output on single line
+-    (reported by `Zahari Dim <https://bitbucket.org/zahari_dim/>`__)
+-  - fix for safe dumping erroring in creation of representereror when dumping namedtuple
+-    (reported and solution by `Jaakko Kantojärvi <https://bitbucket.org/raphendyr/>`__)
+-
+-[0, 15, 88]: 2019-02-12
+-  - fix inclusing of python code from the subpackage data (containing extra tests,
+-    reported by `Florian Apolloner <https://bitbucket.org/apollo13/>`__)
+-
+-[0, 15, 87]: 2019-01-22
+-  - fix problem with empty lists and the code to reinsert merge keys (reported via email 
+-    by Zaloo)
+-
+-[0, 15, 86]: 2019-01-16
+-  - reinsert merge key in its old position (reported by grumbler on
+-    <Stackoverflow <https://stackoverflow.com/a/54206512/1307905>`__)
+-  - fix for issue with non-ASCII anchor names (reported and fix
+-    provided by Dandaleon Flux via email)
+-  - fix for issue when parsing flow mapping value starting with colon (in pure Python only)
+-    (reported by `FichteFoll <https://bitbucket.org/FichteFoll/>`__)
+-
+-[0, 15, 85]: 2019-01-08
+-  - the types used by `SafeConstructor` for mappings and sequences can
+-    now by set by assigning to `XXXConstructor.yaml_base_dict_type`
+-    (and `..._list_type`), preventing the need to copy two methods
+-    with 50+ lines that had `var = {}` hardcoded.  (Implemented to
+-    help solve an feature request by `Anthony Sottile
+-    <https://bitbucket.org/asottile/>`__ in an easier way)
+-
+-[0, 15, 84]: 2019-01-07
+-  - fix for `CommentedMap.copy()` not returning `CommentedMap`, let alone copying comments etc.
+-    (reported by `Anthony Sottile <https://bitbucket.org/asottile/>`__)
+-
+-[0, 15, 83]: 2019-01-02
+-  - fix for bug in roundtripping aliases used as key (reported via email by Zaloo)
+-
+-[0, 15, 82]: 2018-12-28
+-  - anchors and aliases on scalar int, float, string and bool are now preserved. Anchors
+-    do not need a referring alias for these (reported by 
+-    `Alex Harvey <https://bitbucket.org/alexharv074/>`__)
+-  - anchors no longer lost on tagged objects when roundtripping (reported by `Zaloo 
+-    <https://bitbucket.org/zaloo/>`__)
+-
+-[0, 15, 81]: 2018-12-06
+- - fix issue saving methods of metaclass derived classes (reported and fix provided
+-   by `Douglas Raillard <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-[0, 15, 80]: 2018-11-26
+- - fix issue emitting BEL character when round-tripping invalid folded input
+-   (reported by Isaac on `StackOverflow <https://stackoverflow.com/a/53471217/1307905>`__)
+-    
+-[0, 15, 79]: 2018-11-21
+-  - fix issue with anchors nested deeper than alias (reported by gaFF on
+-    `StackOverflow <https://stackoverflow.com/a/53397781/1307905>`__)
+-
+-[0, 15, 78]: 2018-11-15
+-  - fix setup issue for 3.8 (reported by `Sidney Kuyateh 
+-    <https://bitbucket.org/autinerd/>`__)
+-
+-[0, 15, 77]: 2018-11-09
+-  - setting `yaml.sort_base_mapping_type_on_output = False`, will prevent
+-    explicit sorting by keys in the base representer of mappings. Roundtrip
+-    already did not do this. Usage only makes real sense for Python 3.6+
+-    (feature request by `Sebastian Gerber <https://bitbucket.org/spacemanspiff2007/>`__).
+-  - implement Python version check in YAML metadata in ``_test/test_z_data.py``
+-
+-[0, 15, 76]: 2018-11-01
+-  - fix issue with empty mapping and sequence loaded as flow-style
+-    (mapping reported by `Min RK <https://bitbucket.org/minrk/>`__, sequence
+-    by `Maged Ahmed <https://bitbucket.org/maged2/>`__)
+-
+-[0, 15, 75]: 2018-10-27
+-  - fix issue with single '?' scalar (reported by `Terrance 
+-    <https://bitbucket.org/OllieTerrance/>`__)
+-  - fix issue with duplicate merge keys (prompted by `answering 
+-    <https://stackoverflow.com/a/52852106/1307905>`__ a 
+-    `StackOverflow question <https://stackoverflow.com/q/52851168/1307905>`__
+-    by `math <https://stackoverflow.com/users/1355634/math>`__)
+-
+-[0, 15, 74]: 2018-10-17
+-  - fix dropping of comment on rt before sequence item that is sequence item
+-    (reported by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-
+-[0, 15, 73]: 2018-10-16
+-  - fix irregular output on pre-comment in sequence within sequence (reported
+-    by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-  - allow non-compact (i.e. next line) dumping sequence/mapping within sequence.
+-
+-[0, 15, 72]: 2018-10-06
+-  - fix regression on explicit 1.1 loading with the C based scanner/parser
+-    (reported by `Tomas Vavra <https://bitbucket.org/xtomik/>`__)
+-
+-[0, 15, 71]: 2018-09-26
+-  - fix regression where handcrafted CommentedMaps could not be initiated (reported by 
+-    `Dan Helfman <https://bitbucket.org/dhelfman/>`__)
+-  - fix regression with non-root literal scalars that needed indent indicator
+-    (reported by `Clark Breyman <https://bitbucket.org/clarkbreyman/>`__)
+-  - tag:yaml.org,2002:python/object/apply now also uses __qualname__ on PY3
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-[0, 15, 70]: 2018-09-21
+-  - reverted CommentedMap and CommentedSeq to subclass ordereddict resp. list,
+-    reimplemented merge maps so that both ``dict(**commented_map_instance)`` and JSON
+-    dumping works. This also allows checking with ``isinstance()`` on ``dict`` resp. ``list``.
+-    (Proposed by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__, with feedback
+-    from `blhsing <https://stackoverflow.com/users/6890912/blhsing>`__ on
+-    `StackOverflow <https://stackoverflow.com/q/52314186/1307905>`__)
+-
+-[0, 15, 69]: 2018-09-20
+-  - fix issue with dump_all gobbling end-of-document comments on parsing
+-    (reported by `Pierre B. <https://bitbucket.org/octplane/>`__)
+-
+-[0, 15, 68]: 2018-09-20
+-  - fix issue with parsabel, but incorrect output with nested flow-style sequences
+-    (reported by `Dougal Seeley <https://bitbucket.org/dseeley/>`__)
+-  - fix issue with loading Python objects that have __setstate__ and recursion in parameters
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-[0, 15, 67]: 2018-09-19
+-  - fix issue with extra space inserted with non-root literal strings 
+-    (Issue reported and PR with fix provided by 
+-    `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-
+-[0, 15, 66]: 2018-09-07
+-  - fix issue with fold indicating characters inserted in safe_load-ed folded strings
+-    (reported by `Maximilian Hils <https://bitbucket.org/mhils/>`__).
+-
+-[0, 15, 65]: 2018-09-07
+-  - fix issue #232 revert to throw ParserError for unexcpected ``]``
+-    and ``}`` instead of IndexError. (Issue reported and PR with fix
+-    provided by `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-  - added ``key`` and ``reverse`` parameter (suggested by Jannik Klemm via email)
+-  - indent root level literal scalars that have directive or document end markers
+-    at the beginning of a line
+-
+-[0, 15, 64]: 2018-08-30
+-  - support round-trip of tagged sequences: ``!Arg [a, {b: 1}]``
+-  - single entry mappings in flow sequences now written by default without quotes
+-    set ``yaml.brace_single_entry_mapping_in_flow_sequence=True`` to force
+-    getting ``[a, {b: 1}, {c: {d: 2}}]`` instead of the default ``[a, b: 1, c: {d: 2}]``
+-  - fix issue when roundtripping floats starting with a dot such as ``.5``
+-    (reported by `Harrison Gregg <https://bitbucket.org/HarrisonGregg/>`__)
+-
+-[0, 15, 63]: 2018-08-29
+-  - small fix only necessary for Windows users that don't use wheels.
+-
+-[0, 15, 62]: 2018-08-29
+-  - C based reader/scanner & emitter now allow setting of 1.2 as YAML version.
+-    ** The loading/dumping is still YAML 1.1 code**, so use the common subset of
+-    YAML 1.2 and 1.1 (reported by `Ge Yang <https://bitbucket.org/yangge/>`__)
+-
+-[0, 15, 61]: 2018-08-23
+-  - support for round-tripping folded style scalars (initially requested 
+-    by `Johnathan Viduchinsky <https://bitbucket.org/johnathanvidu/>`__)
+-  - update of C code
+-  - speed up of scanning (~30% depending on the input)
+-
+-[0, 15, 60]: 2018-08-18
+-  - cleanup for mypy 
+-  - spurious print in library (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__), now automatically checked 
+-
+-[0, 15, 59]: 2018-08-17
+-  - issue with C based loader and leading zeros (reported by 
+-    `Tom Hamilton Stubber <https://bitbucket.org/TomHamiltonStubber/>`__)
+-
+-[0, 15, 58]: 2018-08-17
+-  - simple mappings can now be used as keys when round-tripping::
+-
+-      {a: 1, b: 2}: hello world
+-      
+-    although using the obvious operations (del, popitem) on the key will
+-    fail, you can mutilate it by going through its attributes. If you load the
+-    above YAML in `d`, then changing the value is cumbersome:
+-
+-        d = {CommentedKeyMap([('a', 1), ('b', 2)]): "goodbye"}
+-
+-    and changing the key even more so:
+-
+-        d[CommentedKeyMap([('b', 1), ('a', 2)])] = d.pop(
+-                     CommentedKeyMap([('a', 1), ('b', 2)]))
+-
+-    (you can use a `dict` instead of a list of tuples (or ordereddict), but that might result
+-    in a different order, of the keys of the key, in the output)
+-  - check integers to dump with 1.2 patterns instead of 1.1 (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__)
+-  
+-
+-[0, 15, 57]: 2018-08-15
+-  - Fix that CommentedSeq could no longer be used in adding or do a copy
+-    (reported by `Christopher Wright <https://bitbucket.org/CJ-Wright4242/>`__)
+-
+-[0, 15, 56]: 2018-08-15
+-  - fix issue with ``python -O`` optimizing away code (reported, and detailed cause
+-    pinpointed, by `Alex Grönholm <https://bitbucket.org/agronholm/>`__
+-
+-[0, 15, 55]: 2018-08-14
+-
+-  - unmade ``CommentedSeq`` a subclass of ``list``. It is now
+-    indirectly a subclass of the standard
+-    ``collections.abc.MutableSequence`` (without .abc if you are
+-    still on Python2.7). If you do ``isinstance(yaml.load('[1, 2]'),
+-    list)``) anywhere in your code replace ``list`` with
+-    ``MutableSequence``.  Directly, ``CommentedSeq`` is a subclass of
+-    the abstract baseclass ``ruamel.yaml.compat.MutableScliceableSequence``,
+-    with the result that *(extended) slicing is supported on 
+-    ``CommentedSeq``*.
+-    (reported by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__)
+-  - duplicate keys (or their values) with non-ascii now correctly
+-    report in Python2, instead of raising a Unicode error.
+-    (Reported by `Jonathan Pyle <https://bitbucket.org/jonathan_pyle/>`__)
+-
+-[0, 15, 54]: 2018-08-13
+-
+-  - fix issue where a comment could pop-up twice in the output (reported by 
+-    `Mike Kazantsev <https://bitbucket.org/mk_fg/>`__ and by 
+-    `Nate Peterson <https://bitbucket.org/ndpete21/>`__)
+-  - fix issue where JSON object (mapping) without spaces was not parsed
+-    properly (reported by `Marc Schmidt <https://bitbucket.org/marcj/>`__)
+-  - fix issue where comments after empty flow-style mappings were not emitted
+-    (reported by `Qinfench Chen <https://bitbucket.org/flyin5ish/>`__)
+-
+-[0, 15, 53]: 2018-08-12
+-  - fix issue with flow style mapping with comments gobbled newline (reported
+-    by `Christopher Lambert <https://bitbucket.org/XN137/>`__)
+-  - fix issue where single '+' under YAML 1.2 was interpreted as
+-    integer, erroring out (reported by `Jethro Yu
+-    <https://bitbucket.org/jcppkkk/>`__)
+-
+-[0, 15, 52]: 2018-08-09
+-  - added `.copy()` mapping representation for round-tripping
+-    (``CommentedMap``) to fix incomplete copies of merged mappings
+-    (reported by `Will Richards
+-    <https://bitbucket.org/will_richards/>`__) 
+-  - Also unmade that class a subclass of ordereddict to solve incorrect behaviour
+-    for ``{**merged-mapping}`` and ``dict(**merged-mapping)`` (reported by
+-    `Filip Matzner <https://bitbucket.org/FloopCZ/>`__)
+-
+-[0, 15, 51]: 2018-08-08
+-  - Fix method name dumps (were not dotted) and loads (reported by `Douglas Raillard 
+-    <https://bitbucket.org/DouglasRaillard/>`__)
+-  - Fix spurious trailing white-space caused when the comment start
+-    column was no longer reached and there was no actual EOL comment
+-    (e.g. following empty line) and doing substitutions, or when
+-    quotes around scalars got dropped.  (reported by `Thomas Guillet
+-    <https://bitbucket.org/guillett/>`__)
+-
+-[0, 15, 50]: 2018-08-05
+-  - Allow ``YAML()`` as a context manager for output, thereby making it much easier
+-    to generate multi-documents in a stream. 
+-  - Fix issue with incorrect type information for `load()` and `dump()` (reported 
+-    by `Jimbo Jim <https://bitbucket.org/jimbo1qaz/>`__)
+-
+-[0, 15, 49]: 2018-08-05
+-  - fix preservation of leading newlines in root level literal style scalar,
+-    and preserve comment after literal style indicator (``|  # some comment``)
+-    Both needed for round-tripping multi-doc streams in 
+-    `ryd <https://pypi.org/project/ryd/>`__.
+-
+-[0, 15, 48]: 2018-08-03
+-  - housekeeping: ``oitnb`` for formatting, mypy 0.620 upgrade and conformity
+-
+-[0, 15, 47]: 2018-07-31
+-  - fix broken 3.6 manylinux1 (result of an unclean ``build`` (reported by 
+-    `Roman Sichnyi <https://bitbucket.org/rsichnyi-gl/>`__)
+-
+-
+-[0, 15, 46]: 2018-07-29
+-  - fixed DeprecationWarning for importing from ``collections`` on 3.7
+-    (issue 210, reported by `Reinoud Elhorst
+-    <https://bitbucket.org/reinhrst/>`__). It was `difficult to find
+-    why tox/pytest did not report
+-    <https://stackoverflow.com/q/51573204/1307905>`__ and as time
+-    consuming to actually `fix
+-    <https://stackoverflow.com/a/51573205/1307905>`__ the tests.
+-
+-[0, 15, 45]: 2018-07-26
+-  - After adding failing test for ``YAML.load_all(Path())``, remove StopIteration 
+-    (PR provided by `Zachary Buhman <https://bitbucket.org/buhman/>`__,
+-    also reported by `Steven Hiscocks <https://bitbucket.org/sdhiscocks/>`__.
+-
+-[0, 15, 44]: 2018-07-14
+-  - Correct loading plain scalars consisting of numerals only and
+-    starting with `0`, when not explicitly specifying YAML version
+-    1.1. This also fixes the issue about dumping string `'019'` as
+-    plain scalars as reported by `Min RK
+-    <https://bitbucket.org/minrk/>`__, that prompted this chance.
+-
+-[0, 15, 43]: 2018-07-12
+-  - merge PR33: Python2.7 on Windows is narrow, but has no
+-    ``sysconfig.get_config_var('Py_UNICODE_SIZE')``. (merge provided by
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__)
+-  - ``register_class()`` now returns class (proposed by
+-    `Mike Nerone <https://bitbucket.org/Manganeez/>`__}
+-
+-[0, 15, 42]: 2018-07-01
+-  - fix regression showing only on narrow Python 2.7 (py27mu) builds
+-    (with help from
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__ and
+-    `Colm O'Connor <>`__).
+-  - run pre-commit ``tox`` on Python 2.7 wide and narrow, as well as
+-    3.4/3.5/3.6/3.7/pypy
+-
+-[0, 15, 41]: 2018-06-27
+-  - add detection of C-compile failure (investigation prompted by 
+-    `StackOverlow <https://stackoverflow.com/a/51057399/1307905>`__ by 
+-    `Emmanuel Blot <https://stackoverflow.com/users/8233409/emmanuel-blot>`__),
+-    which was removed while no longer dependent on ``libyaml``, C-extensions
+-    compilation still needs a compiler though.
+-
+-[0, 15, 40]: 2018-06-18
+-  - added links to landing places as suggested in issue 190 by
+-    `KostisA <https://bitbucket.org/ankostis/>`__
+-  - fixes issue #201: decoding unicode escaped tags on Python2, reported
+-    by `Dan Abolafia <https://bitbucket.org/danabo/>`__
+-
+-[0, 15, 39]: 2018-06-16
+-  - merge PR27 improving package startup time (and loading when regexp not 
+-    actually used), provided by 
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__
+-
+-[0, 15, 38]: 2018-06-13
+-  - fix for losing precision when roundtripping floats by
+-    `Rolf Wojtech <https://bitbucket.org/asomov/>`__
+-  - fix for hardcoded dir separator not working for Windows by
+-    `Nuno André <https://bitbucket.org/nu_no/>`__
+-  - typo fix by `Andrey Somov <https://bitbucket.org/asomov/>`__
+-
+-[0, 15, 37]: 2018-03-21
+-  - again trying to create installable files for 187
+-
+-[0, 15, 36]: 2018-02-07
+-  - fix issue 187, incompatibility of C extension with 3.7 (reported by
+-    Daniel Blanchard)
+-
+-[0, 15, 35]: 2017-12-03
+-  - allow ``None`` as stream when specifying ``transform`` parameters to
+-    ``YAML.dump()``.
+-    This is useful if the transforming function doesn't return a meaningful value
+-    (inspired by `StackOverflow <https://stackoverflow.com/q/47614862/1307905>`__ by
+-    `rsaw <https://stackoverflow.com/users/406281/rsaw>`__).
+-
+-[0, 15, 34]: 2017-09-17
+-  - fix for issue 157: CDumper not dumping floats (reported by Jan Smitka)
+-
+-[0, 15, 33]: 2017-08-31
+-  - support for "undefined" round-tripping tagged scalar objects (in addition to
+-    tagged mapping object). Inspired by a use case presented by Matthew Patton
+-    on `StackOverflow <https://stackoverflow.com/a/45967047/1307905>`__.
+-  - fix issue 148: replace cryptic error message when using !!timestamp with an
+-    incorrectly formatted or non- scalar. Reported by FichteFoll.
+-
+-[0, 15, 32]: 2017-08-21
+-  - allow setting ``yaml.default_flow_style = None`` (default: ``False``) for
+-    for ``typ='rt'``.
+-  - fix for issue 149: multiplications on ``ScalarFloat`` now return ``float``
+-
+-[0, 15, 31]: 2017-08-15
+-  - fix Comment dumping
+-
+-[0, 15, 30]: 2017-08-14
+-  - fix for issue with "compact JSON" not parsing: ``{"in":{},"out":{}}``
+-    (reported on `StackOverflow <https://stackoverflow.com/q/45681626/1307905>`_ by
+-    `mjalkio <https://stackoverflow.com/users/5130525/mjalkio>`_
+-
+-[0, 15, 29]: 2017-08-14
+-  - fix issue #51: different indents for mappings and sequences (reported by 
+-    Alex Harvey)
+-  - fix for flow sequence/mapping as element/value of block sequence with 
+-    sequence-indent minus dash-offset not equal two.
+-
+-[0, 15, 28]: 2017-08-13
+-  - fix issue #61: merge of merge cannot be __repr__-ed (reported by Tal Liron)
+-
+-[0, 15, 27]: 2017-08-13
+-  - fix issue 62, YAML 1.2 allows ``?`` and ``:`` in plain scalars if non-ambigious
+-    (reported by nowox)
+-  - fix lists within lists which would make comments disappear
+-
+-[0, 15, 26]: 2017-08-10
+-  - fix for disappearing comment after empty flow sequence (reported by
+-    oit-tzhimmash)
+-
+-[0, 15, 25]: 2017-08-09
+-  - fix for problem with dumping (unloaded) floats (reported by eyenseo)
+-
+-[0, 15, 24]: 2017-08-09
+-  - added ScalarFloat which supports roundtripping of 23.1, 23.100,
+-    42.00E+56, 0.0, -0.0 etc. while keeping the format. Underscores in mantissas
+-    are not preserved/supported (yet, is anybody using that?).
+-  - (finally) fixed longstanding issue 23 (reported by `Antony Sottile
+-    <https://bitbucket.org/asottile/>`_), now handling comment between block
+-    mapping key and value correctly
+-  - warn on YAML 1.1 float input that is incorrect (triggered by invalid YAML
+-    provided by Cecil Curry)
+-  - allow setting of boolean representation (`false`, `true`) by using:
+-    ``yaml.boolean_representation = [u'False', u'True']``
+-
+-[0, 15, 23]: 2017-08-01
+-  - fix for round_tripping integers on 2.7.X > sys.maxint (reported by ccatterina)
+-
+-[0, 15, 22]: 2017-07-28
+-  - fix for round_tripping singe excl. mark tags doubling (reported and fix by Jan Brezina)
+-
+-[0, 15, 21]: 2017-07-25
+-  - fix for writing unicode in new API, https://stackoverflow.com/a/45281922/1307905
+-
+-[0, 15, 20]: 2017-07-23
+-  - wheels for windows including C extensions
+-
+-[0, 15, 19]: 2017-07-13
+-  - added object constructor for rt, decorator ``yaml_object`` to replace YAMLObject.
+-  - fix for problem using load_all with Path() instance
+-  - fix for load_all in combination with zero indent block style literal
+-    (``pure=True`` only!)
+-
+-[0, 15, 18]: 2017-07-04
+-  - missing ``pure`` attribute on ``YAML`` useful for implementing `!include` tag
+-    constructor for `including YAML files in a YAML file
+-    <https://stackoverflow.com/a/44913652/1307905>`_
+-  - some documentation improvements
+-  - trigger of doc build on new revision
+-
+-[0, 15, 17]: 2017-07-03
+-  - support for Unicode supplementary Plane **output** with allow_unicode
+-    (input was already supported, triggered by
+-    `this <https://stackoverflow.com/a/44875714/1307905>`_ Stack Overflow Q&A)
+-
+-[0, 15, 16]: 2017-07-01
+-  - minor typing issues (reported and fix provided by
+-    `Manvendra Singh <https://bitbucket.org/manu-chroma/>`_)
+-  - small doc improvements
+-
+-[0, 15, 15]: 2017-06-27
+-  - fix for issue 135, typ='safe' not dumping in Python 2.7
+-    (reported by Andrzej Ostrowski <https://bitbucket.org/aostr123/>`_)
+-
+-[0, 15, 14]: 2017-06-25
+-  - setup.py: change ModuleNotFoundError to ImportError (reported and fix by Asley Drake)
+-
+-[0, 15, 13]: 2017-06-24
+-  - suppress duplicate key warning on mappings with merge keys (reported by
+-    Cameron Sweeney)
+-
+-[0, 15, 12]: 2017-06-24
+-  - remove fatal dependency of setup.py on wheel package (reported by
+-    Cameron Sweeney)
+-
+-[0, 15, 11]: 2017-06-24
+-  - fix for issue 130, regression in nested merge keys (reported by
+-    `David Fee <https://bitbucket.org/dfee/>`_)
+-
+-[0, 15, 10]: 2017-06-23
+-  - top level PreservedScalarString not indented if not explicitly asked to
+-  - remove Makefile (not very useful anyway)
+-  - some mypy additions
+-
+-[0, 15, 9]: 2017-06-16
+-  - fix for issue 127: tagged scalars were always quoted and seperated
+-    by a newline when in a block sequence (reported and largely fixed by
+-    `Tommy Wang <https://bitbucket.org/twang817/>`_)
+-
+-[0, 15, 8]: 2017-06-15
+-  - allow plug-in install via ``install ruamel.yaml[jinja2]``
+-
+-[0, 15, 7]: 2017-06-14
+-  - add plug-in mechanism for load/dump pre resp. post-processing
+-
+-[0, 15, 6]: 2017-06-10
+-  - a set() with duplicate elements now throws error in rt loading
+-  - support for toplevel column zero literal/folded scalar in explicit documents
+-
+-[0, 15, 5]: 2017-06-08
+-  - repeat `load()` on a single `YAML()` instance would fail.
+-
+-(0, 15, 4) 2017-06-08: |
+-  - `transform` parameter on dump that expects a function taking a
+-    string and returning a string. This allows transformation of the output
+-    before it is written to stream.
+-  - some updates to the docs
+-
+-(0, 15, 3) 2017-06-07:
+-  - No longer try to compile C extensions on Windows. Compilation can be forced by setting
+-    the environment variable `RUAMEL_FORCE_EXT_BUILD` to some value
+-    before starting the `pip install`.
+-
+-(0, 15, 2) 2017-06-07:
+-  - update to conform to mypy 0.511:mypy --strict
+-
+-(0, 15, 1) 2017-06-07:
+-  - Any `duplicate keys  <http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys>`_
+-    in mappings generate an error (in the old API this change generates a warning until 0.16)
+-  - dependecy on ruamel.ordereddict for 2.7 now via extras_require
+-
+-(0, 15, 0) 2017-06-04:
+-  - it is now allowed to pass in a ``pathlib.Path`` as "stream" parameter to all
+-    load/dump functions
+-  - passing in a non-supported object (e.g. a string) as "stream" will result in a
+-    much more meaningful YAMLStreamError.
+-  - assigning a normal string value to an existing CommentedMap key or CommentedSeq
+-    element will result in a value cast to the previous value's type if possible.
+-
+-(0, 14, 12) 2017-05-14:
+-  - fix for issue 119, deepcopy not returning subclasses (reported and PR by
+-    Constantine Evans <cevans@evanslabs.org>)
+-
+-(0, 14, 11) 2017-05-01:
+-  - fix for issue 103 allowing implicit documents after document end marker line (``...``)
+-    in YAML 1.2
+-
+-(0, 14, 10) 2017-04-26:
+-  - fix problem with emitting using cyaml
+-
+-(0, 14, 9) 2017-04-22:
+-  - remove dependency on ``typing`` while still supporting ``mypy``
+-    (http://stackoverflow.com/a/43516781/1307905)
+-  - fix unclarity in doc that stated 2.6 is supported (reported by feetdust)
+-
+-(0, 14, 8) 2017-04-19:
+-  - fix Text not available on 3.5.0 and 3.5.1, now proactively setting version guards
+-    on all files (reported by `João Paulo Magalhães <https://bitbucket.org/jpmag/>`_)
+-
+-(0, 14, 7) 2017-04-18:
+-  - round trip of integers (decimal, octal, hex, binary) now preserve
+-    leading zero(s) padding and underscores. Underscores are presumed
+-    to be at regular distances (i.e. ``0o12_345_67`` dumps back as
+-    ``0o1_23_45_67`` as the space from the last digit to the
+-    underscore before that is the determining factor).
+-
+-(0, 14, 6) 2017-04-14:
+-  - binary, octal and hex integers are now preserved by default. This
+-    was a known deficiency. Working on this was prompted by the issue report (112)
+-    from devnoname120, as well as the additional experience with `.replace()`
+-    on `scalarstring` classes.
+-  - fix issues 114 cannot install on Buildozer (reported by mixmastamyk).
+-    Setting env. var ``RUAMEL_NO_PIP_INSTALL_CHECK`` will suppress ``pip``-check.
+-
+-(0, 14, 5) 2017-04-04:
+-  - fix issue 109 None not dumping correctly at top level (reported by Andrea Censi)
+-  - fix issue 110 .replace on Preserved/DoubleQuoted/SingleQuoted ScalarString
+-    would give back "normal" string (reported by sandres23)
+-
+-(0, 14, 4) 2017-03-31:
+-  - fix readme
+-
+-(0, 14, 3) 2017-03-31:
+-  - fix for 0o52 not being a string in YAML 1.1 (reported on
+-    `StackOverflow Q&A 43138503><http://stackoverflow.com/a/43138503/1307905>`_ by
+-    `Frank D <http://stackoverflow.com/users/7796630/frank-d>`_
+-
+-(0, 14, 2) 2017-03-23:
+-  - fix for old default pip on Ubuntu 14.04 (reported by Sébastien Maccagnoni-Munch)
+-
+-(0.14.1) 2017-03-22:
+-  - fix Text not available on 3.5.0 and 3.5.1 (reported by Charles Bouchard-Légaré)
+-
+-(0.14.0) 2017-03-21:
+-  - updates for mypy --strict
+-  - preparation for moving away from inheritance in Loader and Dumper, calls from e.g.
+-    the Representer to the Serializer.serialize() are now done via the attribute
+-    .serializer.serialize(). Usage of .serialize() outside of Serializer will be
+-    deprecated soon
+-  - some extra tests on main.py functions
+-
+-(0.13.14) 2017-02-12:
+-  - fix for issue 97, clipped block scalar followed by empty lines and comment
+-    would result in two CommentTokens of which the first was dropped.
+-    (reported by Colm O'Connor)
+-
+-(0.13.13) 2017-01-28:
+-  - fix for issue 96, prevent insertion of extra empty line if indented mapping entries
+-    are separated by an empty line (reported by Derrick Sawyer)
+-
+-(0.13.11) 2017-01-23:
+-  - allow ':' in flow style scalars if not followed by space. Also don't
+-    quote such scalar as this is no longer necessary.
+-  - add python 3.6 manylinux wheel to PyPI
+-
+-(0.13.10) 2017-01-22:
+-  - fix for issue 93, insert spurious blank line before single line comment
+-    between indented sequence elements (reported by Alex)
+-
+-(0.13.9) 2017-01-18:
+-  - fix for issue 92, wrong import name reported by the-corinthian
+-
+-(0.13.8) 2017-01-18:
+-  - fix for issue 91, when a compiler is unavailable reported by Maximilian Hils
+-  - fix for deepcopy issue with TimeStamps not preserving 'T', reported on
+-    `StackOverflow Q&A <http://stackoverflow.com/a/41577841/1307905>`_ by
+-    `Quuxplusone <http://stackoverflow.com/users/1424877/quuxplusone>`_
+-
+-(0.13.7) 2016-12-27:
+-  - fix for issue 85, constructor.py importing unicode_literals caused mypy to fail
+-    on 2.7 (reported by Peter Amstutz)
+-
+-(0.13.6) 2016-12-27:
+-  - fix for issue 83, collections.OrderedDict not representable by SafeRepresenter
+-    (reported by Frazer McLean)
+-
+-(0.13.5) 2016-12-25:
+-  - fix for issue 84, deepcopy not properly working (reported by Peter Amstutz)
+-
+-(0.13.4) 2016-12-05:
+-  - another fix for issue 82, change to non-global resolver data broke implicit type
+-    specification
+-
+-(0.13.3) 2016-12-05:
+-  - fix for issue 82, deepcopy not working (reported by code monk)
+-
+-(0.13.2) 2016-11-28:
+-  - fix for comments after empty (null) values  (reported by dsw2127 and cokelaer)
+-
+-(0.13.1) 2016-11-22:
+-  - optimisations on memory usage when loading YAML from large files (py3 -50%, py2 -85%)
+-
+-(0.13.0) 2016-11-20:
+-  - if ``load()`` or ``load_all()`` is called with only a single argument
+-    (stream or string)
+-    a UnsafeLoaderWarning will be issued once. If appropriate you can surpress this
+-    warning by filtering it. Explicitly supplying the ``Loader=ruamel.yaml.Loader``
+-    argument, will also prevent it from being issued. You should however consider
+-    using ``safe_load()``, ``safe_load_all()`` if your YAML input does not use tags.
+-  - allow adding comments before and after keys (based on
+-    `StackOveflow Q&A <http://stackoverflow.com/a/40705671/1307905>`_  by
+-    `msinn <http://stackoverflow.com/users/7185467/msinn>`_)
+-
+-(0.12.18) 2016-11-16:
+-  - another fix for numpy (re-reported independently by PaulG & Nathanial Burdic)
+-
+-(0.12.17) 2016-11-15:
+-  - only the RoundTripLoader included the Resolver that supports YAML 1.2
+-    now all loaders do (reported by mixmastamyk)
+-
+-(0.12.16) 2016-11-13:
+-  - allow dot char (and many others) in anchor name
+-    Fix issue 72 (reported by Shalon Wood)
+-  - |
+-    Slightly smarter behaviour dumping strings when no style is
+-    specified. Single string scalars that start with single quotes
+-    or have newlines now are dumped double quoted "'abc\nklm'" instead of
+-
+-      '''abc
+-
+-        klm'''
+-
+-(0.12.14) 2016-09-21:
+- - preserve round-trip sequences that are mapping keys
+-   (prompted by stackoverflow question 39595807 from Nowox)
+-
+-(0.12.13) 2016-09-15:
+- - Fix for issue #60 representation of CommentedMap with merge
+-   keys incorrect (reported by Tal Liron)
+-
+-(0.12.11) 2016-09-06:
+- - Fix issue 58 endless loop in scanning tokens (reported by
+-   Christopher Lambert)
+-
+-(0.12.10) 2016-09-05:
+- - Make previous fix depend on unicode char width (32 bit unicode support
+-   is a problem on MacOS reported by David Tagatac)
+-
+-(0.12.8) 2016-09-05:
+-   - To be ignored Unicode characters were not properly regex matched
+-     (no specific tests, PR by Haraguroicha Hsu)
+-
+-(0.12.7) 2016-09-03:
+-   - fixing issue 54 empty lines with spaces (reported by Alex Harvey)
+-
+-(0.12.6) 2016-09-03:
+-   - fixing issue 46 empty lines between top-level keys were gobbled (but
+-     not between sequence elements, nor between keys in netsted mappings
+-     (reported by Alex Harvey)
+-
+-(0.12.5) 2016-08-20:
+-  - fixing issue 45 preserving datetime formatting (submitted by altuin)
+-    Several formatting parameters are preserved with some normalisation:
+-  - preserve 'T', 't' is replaced by 'T', multiple spaces between date
+-    and time reduced to one.
+-  - optional space before timezone is removed
+-  - still using microseconds, but now rounded (.1234567 -> .123457)
+-  - Z/-5/+01:00 preserved
+-
+-(0.12.4) 2016-08-19:
+-  - Fix for issue 44: missing preserve_quotes keyword argument (reported
+-    by M. Crusoe)
+-
+-(0.12.3) 2016-08-17:
+-  - correct 'in' operation for merged CommentedMaps in round-trip mode
+-    (implementation inspired by J.Ngo, but original not working for merges)
+-  - iteration over round-trip loaded mappings, that contain merges. Also
+-    keys(), items(), values() (Py3/Py2) and iterkeys(), iteritems(),
+-    itervalues(), viewkeys(), viewitems(), viewvalues() (Py2)
+-  - reuse of anchor name now generates warning, not an error. Round-tripping such
+-    anchors works correctly. This inherited PyYAML issue was brought to attention
+-    by G. Coddut (and was long standing https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=515634)
+-    suppressing the warning::
+-
+-        import warnings
+-        from ruamel.yaml.error import ReusedAnchorWarning
+-        warnings.simplefilter("ignore", ReusedAnchorWarning)
+-
+-(0.12.2) 2016-08-16:
+-  - minor improvements based on feedback from M. Crusoe
+-    https://bitbucket.org/ruamel/yaml/issues/42/
+-
+-(0.12.0) 2016-08-16:
+-  - drop support for Python 2.6
+-  - include initial Type information (inspired by M. Crusoe)
+-
+-(0.11.15) 2016-08-07:
+-  - Change to prevent FutureWarning in NumPy, as reported by tgehring
+-    ("comparison to None will result in an elementwise object comparison in the future")
+-
+-(0.11.14) 2016-07-06:
+-  - fix preserve_quotes missing on original Loaders (as reported
+-    by Leynos, bitbucket issue 38)
+-
+-(0.11.13) 2016-07-06:
+-  - documentation only, automated linux wheels
+-
+-(0.11.12) 2016-07-06:
+-  - added support for roundtrip of single/double quoted scalars using:
+-    ruamel.yaml.round_trip_load(stream, preserve_quotes=True)
+-
+-(0.11.10) 2016-05-02:
+-
+-- added .insert(pos, key, value, comment=None) to CommentedMap
+-
+-(0.11.10) 2016-04-19:
+-
+-- indent=2, block_seq_indent=2 works as expected
+-
+-(0.11.0) 2016-02-18:
+-  - RoundTripLoader loads 1.2 by default (no sexagesimals, 012 octals nor
+-    yes/no/on/off booleans
+-
+-(0.10.11) 2015-09-17:
+-- Fix issue 13: dependency on libyaml to be installed for yaml.h
+-
+-(0.10.10) 2015-09-15:
+-- Python 3.5 tested with tox
+-- pypy full test (old PyYAML tests failed on too many open file handles)
+-
+-(0.10.6-0.10.9) 2015-09-14:
+-- Fix for issue 9
+-- Fix for issue 11: double dump losing comments
+-- Include libyaml code
+-- move code from 'py' subdir for proper namespace packaging.
+-
+-(0.10.5) 2015-08-25:
+-- preservation of newlines after block scalars. Contributed by Sam Thursfield.
+-
+-(0.10) 2015-06-22:
+-- preservation of hand crafted anchor names ( not of the form "idNNN")
+-- preservation of map merges ( <<< )
+-
+-(0.9) 2015-04-18:
+-- collections read in by the RoundTripLoader now have a ``lc`` property
+-  that can be quired for line and column ( ``lc.line`` resp. ``lc.col``)
+-
+-(0.8) 2015-04-15:
+-- bug fix for non-roundtrip save of ordereddict
+-- adding/replacing end of line comments on block style mappings/sequences
+-
+-(0.7.2) 2015-03-29:
+-- support for end-of-line comments on flow style sequences and mappings
+-
+-(0.7.1) 2015-03-27:
+-- RoundTrip capability of flow style sequences ( 'a: b, c, d' )
+-
+-(0.7) 2015-03-26:
+-- tests (currently failing) for inline sequece and non-standard spacing between
+-  block sequence dash and scalar (Anthony Sottile)
+-- initial possibility (on list, i.e. CommentedSeq) to set the flow format
+-  explicitly
+-- RoundTrip capability of flow style sequences ( 'a: b, c, d' )
+-
+-(0.6.1) 2015-03-15:
+-- setup.py changed so ruamel.ordereddict no longer is a dependency
+-  if not on CPython 2.x (used to test only for 2.x, which breaks pypy 2.5.0
+-  reported by Anthony Sottile)
+-
+-(0.6) 2015-03-11:
+-- basic support for scalars with preserved newlines
+-- html option for yaml command
+-- check if yaml C library is available before trying to compile C extension
+-- include unreleased change in PyYAML dd 20141128
+-
+-(0.5) 2015-01-14:
+-- move configobj -> YAML generator to own module
+-- added dependency on ruamel.base (based on feedback from  Sess
+-  <leycec@gmail.com>
+-
+-(0.4) 20141125:
+-- move comment classes in own module comments
+-- fix omap pre comment
+-- make !!omap and !!set take parameters. There are still some restrictions:
+-  - no comments before the !!tag
+-- extra tests
+-
+-(0.3) 20141124:
+-- fix value comment occuring as on previous line (looking like eol comment)
+-- INI conversion in yaml + tests
+-- (hidden) test in yaml for debugging with auto command
+-- fix for missing comment in middel of simple map + test
+-
+-(0.2) 20141123:
+-- add ext/_yaml.c etc to the source tree
+-- tests for yaml to work on 2.6/3.3/3.4
+-- change install so that you can include ruamel.yaml instead of ruamel.yaml.py
+-- add "yaml" utility with initial subcommands (test rt, from json)
+-
+-(0.1) 20141122:
+-- merge py2 and py3 code bases
+-- remove support for 2.5/3.0/3.1/3.2 (this merge relies on u"" as
+-  available in 3.3 and . imports not available in 2.5)
+-- tox.ini for 2.7/3.4/2.6/3.3
+-- remove lib3/ and tests/lib3 directories and content
+-- commit
+-- correct --verbose for test application
+-- DATA=changed to be relative to __file__ of code
+-- DATA using os.sep
+-- remove os.path from imports as os is already imported
+-- have test_yaml.py exit with value 0 on success, 1 on failures, 2 on
+-  error
+-- added support for octal integers starting with '0o'
+-  keep support for 01234 as well as 0o1234
+-- commit
+-- added test_roundtrip_data:
+-  requirest a .data file and .roundtrip (empty), yaml_load .data
+-  and compare dump against original.
+-- fix grammar as per David Pursehouse:
+-  https://bitbucket.org/xi/pyyaml/pull-request/5/fix-grammar-in-error-messages/diff
+-- http://www.json.org/ extra escaped char \/
+-  add .skip-ext as libyaml is not updated
+-- David Fraser: Extract a method to represent keys in mappings, so that
+-  a subclass can choose not to quote them, used in repesent_mapping
+-  https://bitbucket.org/davidfraser/pyyaml/
+-- add CommentToken and percolate through parser and composer and constructor
+-- add Comments to wrapped mapping and sequence constructs (not to scalars)
+-- generate YAML with comments
+-- initial README
+diff --git a/dynaconf/vendor_src/ruamel/yaml/LICENSE b/dynaconf/vendor_src/ruamel/yaml/LICENSE
+deleted file mode 100644
+index 5b863d3..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/LICENSE
++++ /dev/null
+@@ -1,21 +0,0 @@
+- The MIT License (MIT)
+-
+- Copyright (c) 2014-2020 Anthon van der Neut, Ruamel bvba
+-
+- Permission is hereby granted, free of charge, to any person obtaining a copy
+- of this software and associated documentation files (the "Software"), to deal
+- in the Software without restriction, including without limitation the rights
+- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+- copies of the Software, and to permit persons to whom the Software is
+- furnished to do so, subject to the following conditions:
+-
+- The above copyright notice and this permission notice shall be included in
+- all copies or substantial portions of the Software.
+-
+- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+- SOFTWARE.
+diff --git a/dynaconf/vendor_src/ruamel/yaml/MANIFEST.in b/dynaconf/vendor_src/ruamel/yaml/MANIFEST.in
+deleted file mode 100644
+index 1aa7798..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/MANIFEST.in
++++ /dev/null
+@@ -1,3 +0,0 @@
+-include README.rst LICENSE CHANGES setup.py
+-prune ext*
+-prune clib*
+diff --git a/dynaconf/vendor_src/ruamel/yaml/PKG-INFO b/dynaconf/vendor_src/ruamel/yaml/PKG-INFO
+deleted file mode 100644
+index b0ce985..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/PKG-INFO
++++ /dev/null
+@@ -1,782 +0,0 @@
+-Metadata-Version: 2.1
+-Name: ruamel.yaml
+-Version: 0.16.10
+-Summary: ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order
+-Home-page: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree
+-Author: Anthon van der Neut
+-Author-email: a.van.der.neut@ruamel.eu
+-License: MIT license
+-Description: 
+-        ruamel.yaml
+-        ===========
+-        
+-        ``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python.
+-        
+-        :version:       0.16.10
+-        :updated:       2020-02-12
+-        :documentation: http://yaml.readthedocs.io
+-        :repository:    https://bitbucket.org/ruamel/yaml
+-        :pypi:          https://pypi.org/project/ruamel.yaml/
+-        
+-        
+-        Starting with version 0.15.0 the way YAML files are loaded and dumped
+-        is changing. See the API doc for details.  Currently existing
+-        functionality will throw a warning before being changed/removed.
+-        **For production systems you should pin the version being used with
+-        ``ruamel.yaml<=0.15``**. There might be bug fixes in the 0.14 series,
+-        but new functionality is likely only to be available via the new API.
+-        
+-        If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop
+-        me an email, preferably with some information on how you use the
+-        package (or a link to bitbucket/github) and I'll keep you informed
+-        when the status of the API is stable enough to make the transition.
+-        
+-        * `Overview <http://yaml.readthedocs.org/en/latest/overview.html>`_
+-        * `Installing <http://yaml.readthedocs.org/en/latest/install.html>`_
+-        * `Basic Usage <http://yaml.readthedocs.org/en/latest/basicuse.html>`_
+-        * `Details <http://yaml.readthedocs.org/en/latest/detail.html>`_
+-        * `Examples <http://yaml.readthedocs.org/en/latest/example.html>`_
+-        * `API <http://yaml.readthedocs.org/en/latest/api.html>`_
+-        * `Differences with PyYAML <http://yaml.readthedocs.org/en/latest/pyyaml.html>`_
+-        
+-        .. image:: https://readthedocs.org/projects/yaml/badge/?version=stable
+-           :target: https://yaml.readthedocs.org/en/stable
+-        
+-        .. image:: https://bestpractices.coreinfrastructure.org/projects/1128/badge
+-           :target: https://bestpractices.coreinfrastructure.org/projects/1128
+-        
+-        .. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/license.svg?format=raw
+-           :target: https://opensource.org/licenses/MIT
+-        
+-        .. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/pypi.svg?format=raw
+-           :target: https://pypi.org/project/ruamel.yaml/
+-        
+-        .. image:: https://sourceforge.net/p/oitnb/code/ci/default/tree/_doc/_static/oitnb.svg?format=raw
+-           :target: https://pypi.org/project/oitnb/
+-        
+-        .. image:: http://www.mypy-lang.org/static/mypy_badge.svg
+-           :target: http://mypy-lang.org/
+-        
+-        ChangeLog
+-        =========
+-        
+-        .. should insert NEXT: at the beginning of line for next key (with empty line)
+-        
+-        0.16.10 (2020-02-12):
+-          - (auto) updated image references in README to sourceforge
+-        
+-        0.16.9 (2020-02-11):
+-          - update CHANGES
+-        
+-        0.16.8 (2020-02-11):
+-          - update requirements so that ruamel.yaml.clib is installed for 3.8,
+-            as it has become available (via manylinux builds)
+-        
+-        0.16.7 (2020-01-30):
+-          - fix typchecking issue on TaggedScalar (reported by Jens Nielsen)
+-          - fix error in dumping literal scalar in sequence with comments before element
+-            (reported by `EJ Etherington <https://sourceforge.net/u/ejether/>`__)
+-        
+-        0.16.6 (2020-01-20):
+-          - fix empty string mapping key roundtripping with preservation of quotes as `? ''`
+-            (reported via email by Tomer Aharoni).
+-          - fix incorrect state setting in class constructor (reported by `Douglas Raillard
+-            <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+-          - adjust deprecation warning test for Hashable, as that no longer warns (reported
+-            by `Jason Montleon <https://bitbucket.org/%7B8f377d12-8d5b-4069-a662-00a2674fee4e%7D/>`__)
+-        
+-        0.16.5 (2019-08-18):
+-          - allow for ``YAML(typ=['unsafe', 'pytypes'])``
+-        
+-        0.16.4 (2019-08-16):
+-          - fix output of TAG directives with # (reported by `Thomas Smith
+-            <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-        
+-        
+-        0.16.3 (2019-08-15):
+-          - split construct_object
+-          - change stuff back to keep mypy happy
+-          - move setting of version based on YAML directive to scanner, allowing to
+-            check for file version during TAG directive scanning
+-        
+-        0.16.2 (2019-08-15):
+-          - preserve YAML and TAG directives on roundtrip, correctly output #
+-            in URL for YAML 1.2 (both reported by `Thomas Smith
+-            <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-        
+-        0.16.1 (2019-08-08):
+-          - Force the use of new version of ruamel.yaml.clib (reported by `Alex Joz
+-            <https://bitbucket.org/%7B9af55900-2534-4212-976c-61339b6ffe14%7D/>`__)
+-          - Allow '#' in tag URI as these are allowed in YAML 1.2 (reported by
+-            `Thomas Smith
+-            <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-        
+-        0.16.0 (2019-07-25):
+-          - split of C source that generates .so file to ruamel.yaml.clib
+-          - duplicate keys are now an error when working with the old API as well
+-        
+-        0.15.100 (2019-07-17):
+-          - fixing issue with dumping deep-copied data from commented YAML, by
+-            providing both the memo parameter to __deepcopy__, and by allowing
+-            startmarks to be compared on their content (reported by `Theofilos
+-            Petsios
+-            <https://bitbucket.org/%7Be550bc5d-403d-4fda-820b-bebbe71796d3%7D/>`__)
+-        
+-        0.15.99 (2019-07-12):
+-          - add `py.typed` to distribution, based on a PR submitted by
+-            `Michael Crusoe
+-            <https://bitbucket.org/%7Bc9fbde69-e746-48f5-900d-34992b7860c8%7D/>`__
+-          - merge PR 40 (also by Michael Crusoe) to more accurately specify
+-            repository in the README (also reported in a misunderstood issue
+-            some time ago)
+-        
+-        0.15.98 (2019-07-09):
+-          - regenerate ext/_ruamel_yaml.c with Cython version 0.29.12, needed
+-            for Python 3.8.0b2 (reported by `John Vandenberg
+-            <https://bitbucket.org/%7B6d4e8487-3c97-4dab-a060-088ec50c682c%7D/>`__)
+-        
+-        0.15.97 (2019-06-06):
+-          - regenerate ext/_ruamel_yaml.c with Cython version 0.29.10, needed for
+-            Python 3.8.0b1
+-          - regenerate ext/_ruamel_yaml.c with Cython version 0.29.9, needed for
+-            Python 3.8.0a4 (reported by `Anthony Sottile
+-            <https://bitbucket.org/%7B569cc8ea-0d9e-41cb-94a4-19ea517324df%7D/>`__)
+-        
+-        0.15.96 (2019-05-16):
+-          - fix failure to indent comments on round-trip anchored block style
+-            scalars in block sequence (reported by `William Kimball
+-            <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-        
+-        0.15.95 (2019-05-16):
+-          - fix failure to round-trip anchored scalars in block sequence
+-            (reported by `William Kimball
+-            <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-          - wheel files for Python 3.4 no longer provided (`Python 3.4 EOL 2019-03-18
+-            <https://www.python.org/dev/peps/pep-0429/>`__)
+-        
+-        0.15.94 (2019-04-23):
+-          - fix missing line-break after end-of-file comments not ending in
+-            line-break (reported by `Philip Thompson
+-            <https://bitbucket.org/%7Be42ba205-0876-4151-bcbe-ccaea5bd13ce%7D/>`__)
+-        
+-        0.15.93 (2019-04-21):
+-          - fix failure to parse empty implicit flow mapping key
+-          - in YAML 1.1 plains scalars `y`, 'n', `Y`, and 'N' are now
+-            correctly recognised as booleans and such strings dumped quoted
+-            (reported by `Marcel Bollmann
+-            <https://bitbucket.org/%7Bd8850921-9145-4ad0-ac30-64c3bd9b036d%7D/>`__)
+-        
+-        0.15.92 (2019-04-16):
+-          - fix failure to parse empty implicit block mapping key (reported by 
+-            `Nolan W <https://bitbucket.org/i2labs/>`__)
+-        
+-        0.15.91 (2019-04-05):
+-          - allowing duplicate keys would not work for merge keys (reported by mamacdon on
+-            `StackOverflow <https://stackoverflow.com/questions/55540686/>`__ 
+-        
+-        0.15.90 (2019-04-04):
+-          - fix issue with updating `CommentedMap` from list of tuples (reported by 
+-            `Peter Henry <https://bitbucket.org/mosbasik/>`__)
+-        
+-        0.15.89 (2019-02-27):
+-          - fix for items with flow-mapping in block sequence output on single line
+-            (reported by `Zahari Dim <https://bitbucket.org/zahari_dim/>`__)
+-          - fix for safe dumping erroring in creation of representereror when dumping namedtuple
+-            (reported and solution by `Jaakko Kantojärvi <https://bitbucket.org/raphendyr/>`__)
+-        
+-        0.15.88 (2019-02-12):
+-          - fix inclusing of python code from the subpackage data (containing extra tests,
+-            reported by `Florian Apolloner <https://bitbucket.org/apollo13/>`__)
+-        
+-        0.15.87 (2019-01-22):
+-          - fix problem with empty lists and the code to reinsert merge keys (reported via email 
+-             by Zaloo)
+-        
+-        0.15.86 (2019-01-16):
+-          - reinsert merge key in its old position (reported by grumbler on
+-            `StackOverflow <https://stackoverflow.com/a/54206512/1307905>`__)
+-          - fix for issue with non-ASCII anchor names (reported and fix
+-            provided by Dandaleon Flux via email)
+-          - fix for issue when parsing flow mapping value starting with colon (in pure Python only)
+-            (reported by `FichteFoll <https://bitbucket.org/FichteFoll/>`__)
+-        
+-        0.15.85 (2019-01-08):
+-          - the types used by ``SafeConstructor`` for mappings and sequences can
+-            now by set by assigning to ``XXXConstructor.yaml_base_dict_type``
+-            (and ``..._list_type``), preventing the need to copy two methods
+-            with 50+ lines that had ``var = {}`` hardcoded.  (Implemented to
+-            help solve an feature request by `Anthony Sottile
+-            <https://bitbucket.org/asottile/>`__ in an easier way)
+-        
+-        0.15.84 (2019-01-07):
+-          - fix for ``CommentedMap.copy()`` not returning ``CommentedMap``, let alone copying comments etc.
+-            (reported by `Anthony Sottile <https://bitbucket.org/asottile/>`__)
+-        
+-        0.15.83 (2019-01-02):
+-          - fix for bug in roundtripping aliases used as key (reported via email by Zaloo)
+-        
+-        0.15.82 (2018-12-28):
+-          - anchors and aliases on scalar int, float, string and bool are now preserved. Anchors
+-            do not need a referring alias for these (reported by 
+-            `Alex Harvey <https://bitbucket.org/alexharv074/>`__)
+-          - anchors no longer lost on tagged objects when roundtripping (reported by `Zaloo 
+-            <https://bitbucket.org/zaloo/>`__)
+-        
+-        0.15.81 (2018-12-06):
+-          - fix issue dumping methods of metaclass derived classes (reported and fix provided
+-            by `Douglas Raillard <https://bitbucket.org/DouglasRaillard/>`__)
+-        
+-        0.15.80 (2018-11-26):
+-          - fix issue emitting BEL character when round-tripping invalid folded input
+-            (reported by Isaac on `StackOverflow <https://stackoverflow.com/a/53471217/1307905>`__)
+-            
+-        0.15.79 (2018-11-21):
+-          - fix issue with anchors nested deeper than alias (reported by gaFF on
+-            `StackOverflow <https://stackoverflow.com/a/53397781/1307905>`__)
+-        
+-        0.15.78 (2018-11-15):
+-          - fix setup issue for 3.8 (reported by `Sidney Kuyateh 
+-            <https://bitbucket.org/autinerd/>`__)
+-        
+-        0.15.77 (2018-11-09):
+-          - setting `yaml.sort_base_mapping_type_on_output = False`, will prevent
+-            explicit sorting by keys in the base representer of mappings. Roundtrip
+-            already did not do this. Usage only makes real sense for Python 3.6+
+-            (feature request by `Sebastian Gerber <https://bitbucket.org/spacemanspiff2007/>`__).
+-          - implement Python version check in YAML metadata in ``_test/test_z_data.py``
+-        
+-        0.15.76 (2018-11-01):
+-          - fix issue with empty mapping and sequence loaded as flow-style
+-            (mapping reported by `Min RK <https://bitbucket.org/minrk/>`__, sequence
+-            by `Maged Ahmed <https://bitbucket.org/maged2/>`__)
+-        
+-        0.15.75 (2018-10-27):
+-          - fix issue with single '?' scalar (reported by `Terrance 
+-            <https://bitbucket.org/OllieTerrance/>`__)
+-          - fix issue with duplicate merge keys (prompted by `answering 
+-            <https://stackoverflow.com/a/52852106/1307905>`__ a 
+-            `StackOverflow question <https://stackoverflow.com/q/52851168/1307905>`__
+-            by `math <https://stackoverflow.com/users/1355634/math>`__)
+-        
+-        0.15.74 (2018-10-17):
+-          - fix dropping of comment on rt before sequence item that is sequence item
+-            (reported by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-        
+-        0.15.73 (2018-10-16):
+-          - fix irregular output on pre-comment in sequence within sequence (reported
+-            by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-          - allow non-compact (i.e. next line) dumping sequence/mapping within sequence.
+-        
+-        0.15.72 (2018-10-06):
+-          - fix regression on explicit 1.1 loading with the C based scanner/parser
+-            (reported by `Tomas Vavra <https://bitbucket.org/xtomik/>`__)
+-        
+-        0.15.71 (2018-09-26):
+-          - some of the tests now live in YAML files in the 
+-            `yaml.data <https://bitbucket.org/ruamel/yaml.data>`__ repository. 
+-            ``_test/test_z_data.py`` processes these.
+-          - fix regression where handcrafted CommentedMaps could not be initiated (reported by 
+-            `Dan Helfman <https://bitbucket.org/dhelfman/>`__)
+-          - fix regression with non-root literal scalars that needed indent indicator
+-            (reported by `Clark Breyman <https://bitbucket.org/clarkbreyman/>`__)
+-          - tag:yaml.org,2002:python/object/apply now also uses __qualname__ on PY3
+-            (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-          - issue with self-referring object creation
+-            (reported and fix by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-        
+-        0.15.70 (2018-09-21):
+-          - reverted CommentedMap and CommentedSeq to subclass ordereddict resp. list,
+-            reimplemented merge maps so that both ``dict(**commented_map_instance)`` and JSON
+-            dumping works. This also allows checking with ``isinstance()`` on ``dict`` resp. ``list``.
+-            (Proposed by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__, with feedback
+-            from `blhsing <https://stackoverflow.com/users/6890912/blhsing>`__ on
+-            `StackOverflow <https://stackoverflow.com/q/52314186/1307905>`__)
+-        
+-        0.15.69 (2018-09-20):
+-          - fix issue with dump_all gobbling end-of-document comments on parsing
+-            (reported by `Pierre B. <https://bitbucket.org/octplane/>`__)
+-        
+-        0.15.68 (2018-09-20):
+-          - fix issue with parsabel, but incorrect output with nested flow-style sequences
+-            (reported by `Dougal Seeley <https://bitbucket.org/dseeley/>`__)
+-          - fix issue with loading Python objects that have __setstate__ and recursion in parameters
+-            (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-        
+-        0.15.67 (2018-09-19):
+-          - fix issue with extra space inserted with non-root literal strings 
+-            (Issue reported and PR with fix provided by 
+-            `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-        
+-        0.15.66 (2018-09-07):
+-          - fix issue with fold indicating characters inserted in safe_load-ed folded strings
+-            (reported by `Maximilian Hils <https://bitbucket.org/mhils/>`__).
+-        
+-        0.15.65 (2018-09-07):
+-          - fix issue #232 revert to throw ParserError for unexcpected ``]``
+-            and ``}`` instead of IndexError. (Issue reported and PR with fix
+-            provided by `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-          - added ``key`` and ``reverse`` parameter (suggested by Jannik Klemm via email)
+-          - indent root level literal scalars that have directive or document end markers
+-            at the beginning of a line
+-        
+-        0.15.64 (2018-08-30):
+-          - support round-trip of tagged sequences: ``!Arg [a, {b: 1}]``
+-          - single entry mappings in flow sequences now written by default without braces,
+-            set ``yaml.brace_single_entry_mapping_in_flow_sequence=True`` to force
+-            getting ``[a, {b: 1}, {c: {d: 2}}]`` instead of the default ``[a, b: 1, c: {d: 2}]``
+-          - fix issue when roundtripping floats starting with a dot such as ``.5``
+-            (reported by `Harrison Gregg <https://bitbucket.org/HarrisonGregg/>`__)
+-        
+-        0.15.63 (2018-08-29):
+-          - small fix only necessary for Windows users that don't use wheels.
+-        
+-        0.15.62 (2018-08-29):
+-          - C based reader/scanner & emitter now allow setting of 1.2 as YAML version.
+-            ** The loading/dumping is still YAML 1.1 code**, so use the common subset of
+-            YAML 1.2 and 1.1 (reported by `Ge Yang <https://bitbucket.org/yangge/>`__)
+-        
+-        0.15.61 (2018-08-23):
+-          - support for round-tripping folded style scalars (initially requested 
+-            by `Johnathan Viduchinsky <https://bitbucket.org/johnathanvidu/>`__)
+-          - update of C code
+-          - speed up of scanning (~30% depending on the input)
+-        
+-        0.15.60 (2018-08-18):
+-          - again allow single entry map in flow sequence context (reported by 
+-            `Lee Goolsbee <https://bitbucket.org/lgoolsbee/>`__)
+-          - cleanup for mypy 
+-          - spurious print in library (reported by 
+-            `Lele Gaifax <https://bitbucket.org/lele/>`__), now automatically checked 
+-        
+-        0.15.59 (2018-08-17):
+-          - issue with C based loader and leading zeros (reported by 
+-            `Tom Hamilton Stubber <https://bitbucket.org/TomHamiltonStubber/>`__)
+-        
+-        0.15.58 (2018-08-17):
+-          - simple mappings can now be used as keys when round-tripping::
+-        
+-              {a: 1, b: 2}: hello world
+-              
+-            although using the obvious operations (del, popitem) on the key will
+-            fail, you can mutilate it by going through its attributes. If you load the
+-            above YAML in `d`, then changing the value is cumbersome:
+-        
+-                d = {CommentedKeyMap([('a', 1), ('b', 2)]): "goodbye"}
+-        
+-            and changing the key even more so:
+-        
+-                d[CommentedKeyMap([('b', 1), ('a', 2)])] = d.pop(
+-                             CommentedKeyMap([('a', 1), ('b', 2)]))
+-        
+-            (you can use a `dict` instead of a list of tuples (or ordereddict), but that might result
+-            in a different order, of the keys of the key, in the output)
+-          - check integers to dump with 1.2 patterns instead of 1.1 (reported by 
+-            `Lele Gaifax <https://bitbucket.org/lele/>`__)
+-          
+-        
+-        0.15.57 (2018-08-15):
+-          - Fix that CommentedSeq could no longer be used in adding or do a sort
+-            (reported by `Christopher Wright <https://bitbucket.org/CJ-Wright4242/>`__)
+-        
+-        0.15.56 (2018-08-15):
+-          - fix issue with ``python -O`` optimizing away code (reported, and detailed cause
+-            pinpointed, by `Alex Grönholm <https://bitbucket.org/agronholm/>`__)
+-        
+-        0.15.55 (2018-08-14):
+-          - unmade ``CommentedSeq`` a subclass of ``list``. It is now
+-            indirectly a subclass of the standard
+-            ``collections.abc.MutableSequence`` (without .abc if you are
+-            still on Python2.7). If you do ``isinstance(yaml.load('[1, 2]'),
+-            list)``) anywhere in your code replace ``list`` with
+-            ``MutableSequence``.  Directly, ``CommentedSeq`` is a subclass of
+-            the abstract baseclass ``ruamel.yaml.compat.MutableScliceableSequence``,
+-            with the result that *(extended) slicing is supported on 
+-            ``CommentedSeq``*.
+-            (reported by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__)
+-          - duplicate keys (or their values) with non-ascii now correctly
+-            report in Python2, instead of raising a Unicode error.
+-            (Reported by `Jonathan Pyle <https://bitbucket.org/jonathan_pyle/>`__)
+-        
+-        0.15.54 (2018-08-13):
+-          - fix issue where a comment could pop-up twice in the output (reported by 
+-            `Mike Kazantsev <https://bitbucket.org/mk_fg/>`__ and by 
+-            `Nate Peterson <https://bitbucket.org/ndpete21/>`__)
+-          - fix issue where JSON object (mapping) without spaces was not parsed
+-            properly (reported by `Marc Schmidt <https://bitbucket.org/marcj/>`__)
+-          - fix issue where comments after empty flow-style mappings were not emitted
+-            (reported by `Qinfench Chen <https://bitbucket.org/flyin5ish/>`__)
+-        
+-        0.15.53 (2018-08-12):
+-          - fix issue with flow style mapping with comments gobbled newline (reported
+-            by `Christopher Lambert <https://bitbucket.org/XN137/>`__)
+-          - fix issue where single '+' under YAML 1.2 was interpreted as
+-            integer, erroring out (reported by `Jethro Yu
+-            <https://bitbucket.org/jcppkkk/>`__)
+-        
+-        0.15.52 (2018-08-09):
+-          - added `.copy()` mapping representation for round-tripping
+-            (``CommentedMap``) to fix incomplete copies of merged mappings
+-            (reported by `Will Richards
+-            <https://bitbucket.org/will_richards/>`__) 
+-          - Also unmade that class a subclass of ordereddict to solve incorrect behaviour
+-            for ``{**merged-mapping}`` and ``dict(**merged-mapping)`` (reported independently by
+-            `Tim Olsson <https://bitbucket.org/tgolsson/>`__ and 
+-            `Filip Matzner <https://bitbucket.org/FloopCZ/>`__)
+-        
+-        0.15.51 (2018-08-08):
+-          - Fix method name dumps (were not dotted) and loads (reported by `Douglas Raillard 
+-            <https://bitbucket.org/DouglasRaillard/>`__)
+-          - Fix spurious trailing white-space caused when the comment start
+-            column was no longer reached and there was no actual EOL comment
+-            (e.g. following empty line) and doing substitutions, or when
+-            quotes around scalars got dropped.  (reported by `Thomas Guillet
+-            <https://bitbucket.org/guillett/>`__)
+-        
+-        0.15.50 (2018-08-05):
+-          - Allow ``YAML()`` as a context manager for output, thereby making it much easier
+-            to generate multi-documents in a stream. 
+-          - Fix issue with incorrect type information for `load()` and `dump()` (reported 
+-            by `Jimbo Jim <https://bitbucket.org/jimbo1qaz/>`__)
+-        
+-        0.15.49 (2018-08-05):
+-          - fix preservation of leading newlines in root level literal style scalar,
+-            and preserve comment after literal style indicator (``|  # some comment``)
+-            Both needed for round-tripping multi-doc streams in 
+-            `ryd <https://pypi.org/project/ryd/>`__.
+-        
+-        0.15.48 (2018-08-03):
+-          - housekeeping: ``oitnb`` for formatting, mypy 0.620 upgrade and conformity
+-        
+-        0.15.47 (2018-07-31):
+-          - fix broken 3.6 manylinux1, the result of an unclean ``build`` (reported by 
+-            `Roman Sichnyi <https://bitbucket.org/rsichnyi-gl/>`__)
+-        
+-        
+-        0.15.46 (2018-07-29):
+-          - fixed DeprecationWarning for importing from ``collections`` on 3.7
+-            (issue 210, reported by `Reinoud Elhorst
+-            <https://bitbucket.org/reinhrst/>`__). It was `difficult to find
+-            why tox/pytest did not report
+-            <https://stackoverflow.com/q/51573204/1307905>`__ and as time
+-            consuming to actually `fix
+-            <https://stackoverflow.com/a/51573205/1307905>`__ the tests.
+-        
+-        0.15.45 (2018-07-26):
+-          - After adding failing test for ``YAML.load_all(Path())``, remove StopIteration 
+-            (PR provided by `Zachary Buhman <https://bitbucket.org/buhman/>`__,
+-            also reported by `Steven Hiscocks <https://bitbucket.org/sdhiscocks/>`__.
+-        
+-        0.15.44 (2018-07-14):
+-          - Correct loading plain scalars consisting of numerals only and
+-            starting with `0`, when not explicitly specifying YAML version
+-            1.1. This also fixes the issue about dumping string `'019'` as
+-            plain scalars as reported by `Min RK
+-            <https://bitbucket.org/minrk/>`__, that prompted this chance.
+-        
+-        0.15.43 (2018-07-12):
+-          - merge PR33: Python2.7 on Windows is narrow, but has no
+-            ``sysconfig.get_config_var('Py_UNICODE_SIZE')``. (merge provided by
+-            `Marcel Bargull <https://bitbucket.org/mbargull/>`__)
+-          - ``register_class()`` now returns class (proposed by
+-            `Mike Nerone <https://bitbucket.org/Manganeez/>`__}
+-        
+-        0.15.42 (2018-07-01):
+-          - fix regression showing only on narrow Python 2.7 (py27mu) builds
+-            (with help from
+-            `Marcel Bargull <https://bitbucket.org/mbargull/>`__ and
+-            `Colm O'Connor <https://bitbucket.org/colmoconnorgithub/>`__).
+-          - run pre-commit ``tox`` on Python 2.7 wide and narrow, as well as
+-            3.4/3.5/3.6/3.7/pypy
+-        
+-        0.15.41 (2018-06-27):
+-          - add detection of C-compile failure (investigation prompted by
+-            `StackOverlow <https://stackoverflow.com/a/51057399/1307905>`__ by
+-            `Emmanuel Blot <https://stackoverflow.com/users/8233409/emmanuel-blot>`__),
+-            which was removed while no longer dependent on ``libyaml``, C-extensions
+-            compilation still needs a compiler though.
+-        
+-        0.15.40 (2018-06-18):
+-          - added links to landing places as suggested in issue 190 by
+-            `KostisA <https://bitbucket.org/ankostis/>`__
+-          - fixes issue #201: decoding unicode escaped tags on Python2, reported
+-            by `Dan Abolafia <https://bitbucket.org/danabo/>`__
+-        
+-        0.15.39 (2018-06-17):
+-          - merge PR27 improving package startup time (and loading when regexp not
+-            actually used), provided by
+-            `Marcel Bargull <https://bitbucket.org/mbargull/>`__
+-        
+-        0.15.38 (2018-06-13):
+-          - fix for losing precision when roundtripping floats by
+-            `Rolf Wojtech <https://bitbucket.org/asomov/>`__
+-          - fix for hardcoded dir separator not working for Windows by
+-            `Nuno André <https://bitbucket.org/nu_no/>`__
+-          - typo fix by `Andrey Somov <https://bitbucket.org/asomov/>`__
+-        
+-        0.15.37 (2018-03-21):
+-          - again trying to create installable files for 187
+-        
+-        0.15.36 (2018-02-07):
+-          - fix issue 187, incompatibility of C extension with 3.7 (reported by
+-            Daniel Blanchard)
+-        
+-        0.15.35 (2017-12-03):
+-          - allow ``None`` as stream when specifying ``transform`` parameters to
+-            ``YAML.dump()``.
+-            This is useful if the transforming function doesn't return a meaningful value
+-            (inspired by `StackOverflow <https://stackoverflow.com/q/47614862/1307905>`__ by
+-            `rsaw <https://stackoverflow.com/users/406281/rsaw>`__).
+-        
+-        0.15.34 (2017-09-17):
+-          - fix for issue 157: CDumper not dumping floats (reported by Jan Smitka)
+-        
+-        0.15.33 (2017-08-31):
+-          - support for "undefined" round-tripping tagged scalar objects (in addition to
+-            tagged mapping object). Inspired by a use case presented by Matthew Patton
+-            on `StackOverflow <https://stackoverflow.com/a/45967047/1307905>`__.
+-          - fix issue 148: replace cryptic error message when using !!timestamp with an
+-            incorrectly formatted or non- scalar. Reported by FichteFoll.
+-        
+-        0.15.32 (2017-08-21):
+-          - allow setting ``yaml.default_flow_style = None`` (default: ``False``) for
+-            for ``typ='rt'``.
+-          - fix for issue 149: multiplications on ``ScalarFloat`` now return ``float``
+-            (reported by jan.brezina@tul.cz)
+-        
+-        0.15.31 (2017-08-15):
+-          - fix Comment dumping
+-        
+-        0.15.30 (2017-08-14):
+-          - fix for issue with "compact JSON" not parsing: ``{"in":{},"out":{}}``
+-            (reported on `StackOverflow <https://stackoverflow.com/q/45681626/1307905>`__ by
+-            `mjalkio <https://stackoverflow.com/users/5130525/mjalkio>`_
+-        
+-        0.15.29 (2017-08-14):
+-          - fix issue #51: different indents for mappings and sequences (reported by
+-            Alex Harvey)
+-          - fix for flow sequence/mapping as element/value of block sequence with
+-            sequence-indent minus dash-offset not equal two.
+-        
+-        0.15.28 (2017-08-13):
+-          - fix issue #61: merge of merge cannot be __repr__-ed (reported by Tal Liron)
+-        
+-        0.15.27 (2017-08-13):
+-          - fix issue 62, YAML 1.2 allows ``?`` and ``:`` in plain scalars if non-ambigious
+-            (reported by nowox)
+-          - fix lists within lists which would make comments disappear
+-        
+-        0.15.26 (2017-08-10):
+-          - fix for disappearing comment after empty flow sequence (reported by
+-            oit-tzhimmash)
+-        
+-        0.15.25 (2017-08-09):
+-          - fix for problem with dumping (unloaded) floats (reported by eyenseo)
+-        
+-        0.15.24 (2017-08-09):
+-          - added ScalarFloat which supports roundtripping of 23.1, 23.100,
+-            42.00E+56, 0.0, -0.0 etc. while keeping the format. Underscores in mantissas
+-            are not preserved/supported (yet, is anybody using that?).
+-          - (finally) fixed longstanding issue 23 (reported by `Antony Sottile
+-            <https://bitbucket.org/asottile/>`__), now handling comment between block
+-            mapping key and value correctly
+-          - warn on YAML 1.1 float input that is incorrect (triggered by invalid YAML
+-            provided by Cecil Curry)
+-          - allow setting of boolean representation (`false`, `true`) by using:
+-            ``yaml.boolean_representation = [u'False', u'True']``
+-        
+-        0.15.23 (2017-08-01):
+-          - fix for round_tripping integers on 2.7.X > sys.maxint (reported by ccatterina)
+-        
+-        0.15.22 (2017-07-28):
+-          - fix for round_tripping singe excl. mark tags doubling (reported and fix by Jan Brezina)
+-        
+-        0.15.21 (2017-07-25):
+-          - fix for writing unicode in new API, (reported on
+-            `StackOverflow <https://stackoverflow.com/a/45281922/1307905>`__
+-        
+-        0.15.20 (2017-07-23):
+-          - wheels for windows including C extensions
+-        
+-        0.15.19 (2017-07-13):
+-          - added object constructor for rt, decorator ``yaml_object`` to replace YAMLObject.
+-          - fix for problem using load_all with Path() instance
+-          - fix for load_all in combination with zero indent block style literal
+-            (``pure=True`` only!)
+-        
+-        0.15.18 (2017-07-04):
+-          - missing ``pure`` attribute on ``YAML`` useful for implementing `!include` tag
+-            constructor for `including YAML files in a YAML file
+-            <https://stackoverflow.com/a/44913652/1307905>`__
+-          - some documentation improvements
+-          - trigger of doc build on new revision
+-        
+-        0.15.17 (2017-07-03):
+-          - support for Unicode supplementary Plane **output**
+-            (input was already supported, triggered by
+-            `this <https://stackoverflow.com/a/44875714/1307905>`__ Stack Overflow Q&A)
+-        
+-        0.15.16 (2017-07-01):
+-          - minor typing issues (reported and fix provided by
+-            `Manvendra Singh <https://bitbucket.org/manu-chroma/>`__
+-          - small doc improvements
+-        
+-        0.15.15 (2017-06-27):
+-          - fix for issue 135, typ='safe' not dumping in Python 2.7
+-            (reported by Andrzej Ostrowski <https://bitbucket.org/aostr123/>`__)
+-        
+-        0.15.14 (2017-06-25):
+-          - fix for issue 133, in setup.py: change ModuleNotFoundError to
+-            ImportError (reported and fix by
+-            `Asley Drake  <https://github.com/aldraco>`__)
+-        
+-        0.15.13 (2017-06-24):
+-          - suppress duplicate key warning on mappings with merge keys (reported by
+-            Cameron Sweeney)
+-        
+-        0.15.12 (2017-06-24):
+-          - remove fatal dependency of setup.py on wheel package (reported by
+-            Cameron Sweeney)
+-        
+-        0.15.11 (2017-06-24):
+-          - fix for issue 130, regression in nested merge keys (reported by
+-            `David Fee <https://bitbucket.org/dfee/>`__)
+-        
+-        0.15.10 (2017-06-23):
+-          - top level PreservedScalarString not indented if not explicitly asked to
+-          - remove Makefile (not very useful anyway)
+-          - some mypy additions
+-        
+-        0.15.9 (2017-06-16):
+-          - fix for issue 127: tagged scalars were always quoted and seperated
+-            by a newline when in a block sequence (reported and largely fixed by
+-            `Tommy Wang <https://bitbucket.org/twang817/>`__)
+-        
+-        0.15.8 (2017-06-15):
+-          - allow plug-in install via ``install ruamel.yaml[jinja2]``
+-        
+-        0.15.7 (2017-06-14):
+-          - add plug-in mechanism for load/dump pre resp. post-processing
+-        
+-        0.15.6 (2017-06-10):
+-          - a set() with duplicate elements now throws error in rt loading
+-          - support for toplevel column zero literal/folded scalar in explicit documents
+-        
+-        0.15.5 (2017-06-08):
+-          - repeat `load()` on a single `YAML()` instance would fail.
+-        
+-        0.15.4 (2017-06-08):
+-          - `transform` parameter on dump that expects a function taking a
+-            string and returning a string. This allows transformation of the output
+-            before it is written to stream. This forces creation of the complete output in memory!
+-          - some updates to the docs
+-        
+-        0.15.3 (2017-06-07):
+-          - No longer try to compile C extensions on Windows. Compilation can be forced by setting
+-            the environment variable `RUAMEL_FORCE_EXT_BUILD` to some value
+-            before starting the `pip install`.
+-        
+-        0.15.2 (2017-06-07):
+-          - update to conform to mypy 0.511: mypy --strict
+-        
+-        0.15.1 (2017-06-07):
+-          - `duplicate keys  <http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys>`__
+-            in mappings generate an error (in the old API this change generates a warning until 0.16)
+-          - dependecy on ruamel.ordereddict for 2.7 now via extras_require
+-        
+-        0.15.0 (2017-06-04):
+-          - it is now allowed to pass in a ``pathlib.Path`` as "stream" parameter to all
+-            load/dump functions
+-          - passing in a non-supported object (e.g. a string) as "stream" will result in a
+-            much more meaningful YAMLStreamError.
+-          - assigning a normal string value to an existing CommentedMap key or CommentedSeq
+-            element will result in a value cast to the previous value's type if possible.
+-          - added ``YAML`` class for new API
+-        
+-        0.14.12 (2017-05-14):
+-          - fix for issue 119, deepcopy not returning subclasses (reported and PR by
+-            Constantine Evans <cevans@evanslabs.org>)
+-        
+-        0.14.11 (2017-05-01):
+-          - fix for issue 103 allowing implicit documents after document end marker line (``...``)
+-            in YAML 1.2
+-        
+-        0.14.10 (2017-04-26):
+-          - fix problem with emitting using cyaml
+-        
+-        0.14.9 (2017-04-22):
+-          - remove dependency on ``typing`` while still supporting ``mypy``
+-            (http://stackoverflow.com/a/43516781/1307905)
+-          - fix unclarity in doc that stated 2.6 is supported (reported by feetdust)
+-        
+-        0.14.8 (2017-04-19):
+-          - fix Text not available on 3.5.0 and 3.5.1, now proactively setting version guards
+-            on all files (reported by `João Paulo Magalhães <https://bitbucket.org/jpmag/>`__)
+-        
+-        0.14.7 (2017-04-18):
+-          - round trip of integers (decimal, octal, hex, binary) now preserve
+-            leading zero(s) padding and underscores. Underscores are presumed
+-            to be at regular distances (i.e. ``0o12_345_67`` dumps back as
+-            ``0o1_23_45_67`` as the space from the last digit to the
+-            underscore before that is the determining factor).
+-        
+-        0.14.6 (2017-04-14):
+-          - binary, octal and hex integers are now preserved by default. This
+-            was a known deficiency. Working on this was prompted by the issue report (112)
+-            from devnoname120, as well as the additional experience with `.replace()`
+-            on `scalarstring` classes.
+-          - fix issues 114: cannot install on Buildozer (reported by mixmastamyk).
+-            Setting env. var ``RUAMEL_NO_PIP_INSTALL_CHECK`` will suppress ``pip``-check.
+-        
+-        0.14.5 (2017-04-04):
+-          - fix issue 109: None not dumping correctly at top level (reported by Andrea Censi)
+-          - fix issue 110: .replace on Preserved/DoubleQuoted/SingleQuoted ScalarString
+-            would give back "normal" string (reported by sandres23)
+-        
+-        0.14.4 (2017-03-31):
+-          - fix readme
+-        
+-        0.14.3 (2017-03-31):
+-          - fix for 0o52 not being a string in YAML 1.1 (reported on
+-            `StackOverflow Q&A 43138503 <http://stackoverflow.com/a/43138503/1307905>`__ by
+-            `Frank D <http://stackoverflow.com/users/7796630/frank-d>`__)
+-        
+-        0.14.2 (2017-03-23):
+-          - fix for old default pip on Ubuntu 14.04 (reported by Sébastien Maccagnoni-Munch)
+-        
+-        0.14.1 (2017-03-22):
+-          - fix Text not available on 3.5.0 and 3.5.1 (reported by Charles Bouchard-Légaré)
+-        
+-        0.14.0 (2017-03-21):
+-          - updates for mypy --strict
+-          - preparation for moving away from inheritance in Loader and Dumper, calls from e.g.
+-            the Representer to the Serializer.serialize() are now done via the attribute
+-            .serializer.serialize(). Usage of .serialize() outside of Serializer will be
+-            deprecated soon
+-          - some extra tests on main.py functions
+-        
+-        ----
+-        
+-        For older changes see the file
+-        `CHANGES <https://bitbucket.org/ruamel/yaml/src/default/CHANGES>`_
+-        
+-Keywords: yaml 1.2 parser round-trip preserve quotes order config
+-Platform: UNKNOWN
+-Classifier: Development Status :: 4 - Beta
+-Classifier: Intended Audience :: Developers
+-Classifier: License :: OSI Approved :: MIT License
+-Classifier: Operating System :: OS Independent
+-Classifier: Programming Language :: Python
+-Classifier: Programming Language :: Python :: 2.7
+-Classifier: Programming Language :: Python :: 3.5
+-Classifier: Programming Language :: Python :: 3.6
+-Classifier: Programming Language :: Python :: 3.7
+-Classifier: Programming Language :: Python :: 3.8
+-Classifier: Programming Language :: Python :: Implementation :: CPython
+-Classifier: Programming Language :: Python :: Implementation :: Jython
+-Classifier: Programming Language :: Python :: Implementation :: PyPy
+-Classifier: Topic :: Software Development :: Libraries :: Python Modules
+-Classifier: Topic :: Text Processing :: Markup
+-Classifier: Typing :: Typed
+-Description-Content-Type: text/x-rst
+-Provides-Extra: docs
+-Provides-Extra: jinja2
+diff --git a/dynaconf/vendor_src/ruamel/yaml/README.rst b/dynaconf/vendor_src/ruamel/yaml/README.rst
+deleted file mode 100644
+index 2a99cb9..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/README.rst
++++ /dev/null
+@@ -1,752 +0,0 @@
+-
+-ruamel.yaml
+-===========
+-
+-``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python.
+-
+-:version:       0.16.10
+-:updated:       2020-02-12
+-:documentation: http://yaml.readthedocs.io
+-:repository:    https://bitbucket.org/ruamel/yaml
+-:pypi:          https://pypi.org/project/ruamel.yaml/
+-
+-
+-Starting with version 0.15.0 the way YAML files are loaded and dumped
+-is changing. See the API doc for details.  Currently existing
+-functionality will throw a warning before being changed/removed.
+-**For production systems you should pin the version being used with
+-``ruamel.yaml<=0.15``**. There might be bug fixes in the 0.14 series,
+-but new functionality is likely only to be available via the new API.
+-
+-If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop
+-me an email, preferably with some information on how you use the
+-package (or a link to bitbucket/github) and I'll keep you informed
+-when the status of the API is stable enough to make the transition.
+-
+-* `Overview <http://yaml.readthedocs.org/en/latest/overview.html>`_
+-* `Installing <http://yaml.readthedocs.org/en/latest/install.html>`_
+-* `Basic Usage <http://yaml.readthedocs.org/en/latest/basicuse.html>`_
+-* `Details <http://yaml.readthedocs.org/en/latest/detail.html>`_
+-* `Examples <http://yaml.readthedocs.org/en/latest/example.html>`_
+-* `API <http://yaml.readthedocs.org/en/latest/api.html>`_
+-* `Differences with PyYAML <http://yaml.readthedocs.org/en/latest/pyyaml.html>`_
+-
+-.. image:: https://readthedocs.org/projects/yaml/badge/?version=stable
+-   :target: https://yaml.readthedocs.org/en/stable
+-
+-.. image:: https://bestpractices.coreinfrastructure.org/projects/1128/badge
+-   :target: https://bestpractices.coreinfrastructure.org/projects/1128
+-
+-.. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/license.svg?format=raw
+-   :target: https://opensource.org/licenses/MIT
+-
+-.. image:: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/_doc/_static/pypi.svg?format=raw
+-   :target: https://pypi.org/project/ruamel.yaml/
+-
+-.. image:: https://sourceforge.net/p/oitnb/code/ci/default/tree/_doc/_static/oitnb.svg?format=raw
+-   :target: https://pypi.org/project/oitnb/
+-
+-.. image:: http://www.mypy-lang.org/static/mypy_badge.svg
+-   :target: http://mypy-lang.org/
+-
+-ChangeLog
+-=========
+-
+-.. should insert NEXT: at the beginning of line for next key (with empty line)
+-
+-0.16.10 (2020-02-12):
+-  - (auto) updated image references in README to sourceforge
+-
+-0.16.9 (2020-02-11):
+-  - update CHANGES
+-
+-0.16.8 (2020-02-11):
+-  - update requirements so that ruamel.yaml.clib is installed for 3.8,
+-    as it has become available (via manylinux builds)
+-
+-0.16.7 (2020-01-30):
+-  - fix typchecking issue on TaggedScalar (reported by Jens Nielsen)
+-  - fix error in dumping literal scalar in sequence with comments before element
+-    (reported by `EJ Etherington <https://sourceforge.net/u/ejether/>`__)
+-
+-0.16.6 (2020-01-20):
+-  - fix empty string mapping key roundtripping with preservation of quotes as `? ''`
+-    (reported via email by Tomer Aharoni).
+-  - fix incorrect state setting in class constructor (reported by `Douglas Raillard
+-    <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+-  - adjust deprecation warning test for Hashable, as that no longer warns (reported
+-    by `Jason Montleon <https://bitbucket.org/%7B8f377d12-8d5b-4069-a662-00a2674fee4e%7D/>`__)
+-
+-0.16.5 (2019-08-18):
+-  - allow for ``YAML(typ=['unsafe', 'pytypes'])``
+-
+-0.16.4 (2019-08-16):
+-  - fix output of TAG directives with # (reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-
+-0.16.3 (2019-08-15):
+-  - split construct_object
+-  - change stuff back to keep mypy happy
+-  - move setting of version based on YAML directive to scanner, allowing to
+-    check for file version during TAG directive scanning
+-
+-0.16.2 (2019-08-15):
+-  - preserve YAML and TAG directives on roundtrip, correctly output #
+-    in URL for YAML 1.2 (both reported by `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-0.16.1 (2019-08-08):
+-  - Force the use of new version of ruamel.yaml.clib (reported by `Alex Joz
+-    <https://bitbucket.org/%7B9af55900-2534-4212-976c-61339b6ffe14%7D/>`__)
+-  - Allow '#' in tag URI as these are allowed in YAML 1.2 (reported by
+-    `Thomas Smith
+-    <https://bitbucket.org/%7Bd4c57a72-f041-4843-8217-b4d48b6ece2f%7D/>`__)
+-
+-0.16.0 (2019-07-25):
+-  - split of C source that generates .so file to ruamel.yaml.clib
+-  - duplicate keys are now an error when working with the old API as well
+-
+-0.15.100 (2019-07-17):
+-  - fixing issue with dumping deep-copied data from commented YAML, by
+-    providing both the memo parameter to __deepcopy__, and by allowing
+-    startmarks to be compared on their content (reported by `Theofilos
+-    Petsios
+-    <https://bitbucket.org/%7Be550bc5d-403d-4fda-820b-bebbe71796d3%7D/>`__)
+-
+-0.15.99 (2019-07-12):
+-  - add `py.typed` to distribution, based on a PR submitted by
+-    `Michael Crusoe
+-    <https://bitbucket.org/%7Bc9fbde69-e746-48f5-900d-34992b7860c8%7D/>`__
+-  - merge PR 40 (also by Michael Crusoe) to more accurately specify
+-    repository in the README (also reported in a misunderstood issue
+-    some time ago)
+-
+-0.15.98 (2019-07-09):
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.12, needed
+-    for Python 3.8.0b2 (reported by `John Vandenberg
+-    <https://bitbucket.org/%7B6d4e8487-3c97-4dab-a060-088ec50c682c%7D/>`__)
+-
+-0.15.97 (2019-06-06):
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.10, needed for
+-    Python 3.8.0b1
+-  - regenerate ext/_ruamel_yaml.c with Cython version 0.29.9, needed for
+-    Python 3.8.0a4 (reported by `Anthony Sottile
+-    <https://bitbucket.org/%7B569cc8ea-0d9e-41cb-94a4-19ea517324df%7D/>`__)
+-
+-0.15.96 (2019-05-16):
+-  - fix failure to indent comments on round-trip anchored block style
+-    scalars in block sequence (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-
+-0.15.95 (2019-05-16):
+-  - fix failure to round-trip anchored scalars in block sequence
+-    (reported by `William Kimball
+-    <https://bitbucket.org/%7Bba35ed20-4bb0-46f8-bb5d-c29871e86a22%7D/>`__)
+-  - wheel files for Python 3.4 no longer provided (`Python 3.4 EOL 2019-03-18
+-    <https://www.python.org/dev/peps/pep-0429/>`__)
+-
+-0.15.94 (2019-04-23):
+-  - fix missing line-break after end-of-file comments not ending in
+-    line-break (reported by `Philip Thompson
+-    <https://bitbucket.org/%7Be42ba205-0876-4151-bcbe-ccaea5bd13ce%7D/>`__)
+-
+-0.15.93 (2019-04-21):
+-  - fix failure to parse empty implicit flow mapping key
+-  - in YAML 1.1 plains scalars `y`, 'n', `Y`, and 'N' are now
+-    correctly recognised as booleans and such strings dumped quoted
+-    (reported by `Marcel Bollmann
+-    <https://bitbucket.org/%7Bd8850921-9145-4ad0-ac30-64c3bd9b036d%7D/>`__)
+-
+-0.15.92 (2019-04-16):
+-  - fix failure to parse empty implicit block mapping key (reported by 
+-    `Nolan W <https://bitbucket.org/i2labs/>`__)
+-
+-0.15.91 (2019-04-05):
+-  - allowing duplicate keys would not work for merge keys (reported by mamacdon on
+-    `StackOverflow <https://stackoverflow.com/questions/55540686/>`__ 
+-
+-0.15.90 (2019-04-04):
+-  - fix issue with updating `CommentedMap` from list of tuples (reported by 
+-    `Peter Henry <https://bitbucket.org/mosbasik/>`__)
+-
+-0.15.89 (2019-02-27):
+-  - fix for items with flow-mapping in block sequence output on single line
+-    (reported by `Zahari Dim <https://bitbucket.org/zahari_dim/>`__)
+-  - fix for safe dumping erroring in creation of representereror when dumping namedtuple
+-    (reported and solution by `Jaakko Kantojärvi <https://bitbucket.org/raphendyr/>`__)
+-
+-0.15.88 (2019-02-12):
+-  - fix inclusing of python code from the subpackage data (containing extra tests,
+-    reported by `Florian Apolloner <https://bitbucket.org/apollo13/>`__)
+-
+-0.15.87 (2019-01-22):
+-  - fix problem with empty lists and the code to reinsert merge keys (reported via email 
+-     by Zaloo)
+-
+-0.15.86 (2019-01-16):
+-  - reinsert merge key in its old position (reported by grumbler on
+-    `StackOverflow <https://stackoverflow.com/a/54206512/1307905>`__)
+-  - fix for issue with non-ASCII anchor names (reported and fix
+-    provided by Dandaleon Flux via email)
+-  - fix for issue when parsing flow mapping value starting with colon (in pure Python only)
+-    (reported by `FichteFoll <https://bitbucket.org/FichteFoll/>`__)
+-
+-0.15.85 (2019-01-08):
+-  - the types used by ``SafeConstructor`` for mappings and sequences can
+-    now by set by assigning to ``XXXConstructor.yaml_base_dict_type``
+-    (and ``..._list_type``), preventing the need to copy two methods
+-    with 50+ lines that had ``var = {}`` hardcoded.  (Implemented to
+-    help solve an feature request by `Anthony Sottile
+-    <https://bitbucket.org/asottile/>`__ in an easier way)
+-
+-0.15.84 (2019-01-07):
+-  - fix for ``CommentedMap.copy()`` not returning ``CommentedMap``, let alone copying comments etc.
+-    (reported by `Anthony Sottile <https://bitbucket.org/asottile/>`__)
+-
+-0.15.83 (2019-01-02):
+-  - fix for bug in roundtripping aliases used as key (reported via email by Zaloo)
+-
+-0.15.82 (2018-12-28):
+-  - anchors and aliases on scalar int, float, string and bool are now preserved. Anchors
+-    do not need a referring alias for these (reported by 
+-    `Alex Harvey <https://bitbucket.org/alexharv074/>`__)
+-  - anchors no longer lost on tagged objects when roundtripping (reported by `Zaloo 
+-    <https://bitbucket.org/zaloo/>`__)
+-
+-0.15.81 (2018-12-06):
+-  - fix issue dumping methods of metaclass derived classes (reported and fix provided
+-    by `Douglas Raillard <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-0.15.80 (2018-11-26):
+-  - fix issue emitting BEL character when round-tripping invalid folded input
+-    (reported by Isaac on `StackOverflow <https://stackoverflow.com/a/53471217/1307905>`__)
+-    
+-0.15.79 (2018-11-21):
+-  - fix issue with anchors nested deeper than alias (reported by gaFF on
+-    `StackOverflow <https://stackoverflow.com/a/53397781/1307905>`__)
+-
+-0.15.78 (2018-11-15):
+-  - fix setup issue for 3.8 (reported by `Sidney Kuyateh 
+-    <https://bitbucket.org/autinerd/>`__)
+-
+-0.15.77 (2018-11-09):
+-  - setting `yaml.sort_base_mapping_type_on_output = False`, will prevent
+-    explicit sorting by keys in the base representer of mappings. Roundtrip
+-    already did not do this. Usage only makes real sense for Python 3.6+
+-    (feature request by `Sebastian Gerber <https://bitbucket.org/spacemanspiff2007/>`__).
+-  - implement Python version check in YAML metadata in ``_test/test_z_data.py``
+-
+-0.15.76 (2018-11-01):
+-  - fix issue with empty mapping and sequence loaded as flow-style
+-    (mapping reported by `Min RK <https://bitbucket.org/minrk/>`__, sequence
+-    by `Maged Ahmed <https://bitbucket.org/maged2/>`__)
+-
+-0.15.75 (2018-10-27):
+-  - fix issue with single '?' scalar (reported by `Terrance 
+-    <https://bitbucket.org/OllieTerrance/>`__)
+-  - fix issue with duplicate merge keys (prompted by `answering 
+-    <https://stackoverflow.com/a/52852106/1307905>`__ a 
+-    `StackOverflow question <https://stackoverflow.com/q/52851168/1307905>`__
+-    by `math <https://stackoverflow.com/users/1355634/math>`__)
+-
+-0.15.74 (2018-10-17):
+-  - fix dropping of comment on rt before sequence item that is sequence item
+-    (reported by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-
+-0.15.73 (2018-10-16):
+-  - fix irregular output on pre-comment in sequence within sequence (reported
+-    by `Thorsten Kampe <https://bitbucket.org/thorstenkampe/>`__)
+-  - allow non-compact (i.e. next line) dumping sequence/mapping within sequence.
+-
+-0.15.72 (2018-10-06):
+-  - fix regression on explicit 1.1 loading with the C based scanner/parser
+-    (reported by `Tomas Vavra <https://bitbucket.org/xtomik/>`__)
+-
+-0.15.71 (2018-09-26):
+-  - some of the tests now live in YAML files in the 
+-    `yaml.data <https://bitbucket.org/ruamel/yaml.data>`__ repository. 
+-    ``_test/test_z_data.py`` processes these.
+-  - fix regression where handcrafted CommentedMaps could not be initiated (reported by 
+-    `Dan Helfman <https://bitbucket.org/dhelfman/>`__)
+-  - fix regression with non-root literal scalars that needed indent indicator
+-    (reported by `Clark Breyman <https://bitbucket.org/clarkbreyman/>`__)
+-  - tag:yaml.org,2002:python/object/apply now also uses __qualname__ on PY3
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-  - issue with self-referring object creation
+-    (reported and fix by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-0.15.70 (2018-09-21):
+-  - reverted CommentedMap and CommentedSeq to subclass ordereddict resp. list,
+-    reimplemented merge maps so that both ``dict(**commented_map_instance)`` and JSON
+-    dumping works. This also allows checking with ``isinstance()`` on ``dict`` resp. ``list``.
+-    (Proposed by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__, with feedback
+-    from `blhsing <https://stackoverflow.com/users/6890912/blhsing>`__ on
+-    `StackOverflow <https://stackoverflow.com/q/52314186/1307905>`__)
+-
+-0.15.69 (2018-09-20):
+-  - fix issue with dump_all gobbling end-of-document comments on parsing
+-    (reported by `Pierre B. <https://bitbucket.org/octplane/>`__)
+-
+-0.15.68 (2018-09-20):
+-  - fix issue with parsabel, but incorrect output with nested flow-style sequences
+-    (reported by `Dougal Seeley <https://bitbucket.org/dseeley/>`__)
+-  - fix issue with loading Python objects that have __setstate__ and recursion in parameters
+-    (reported by `Douglas RAILLARD <https://bitbucket.org/DouglasRaillard/>`__)
+-
+-0.15.67 (2018-09-19):
+-  - fix issue with extra space inserted with non-root literal strings 
+-    (Issue reported and PR with fix provided by 
+-    `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-
+-0.15.66 (2018-09-07):
+-  - fix issue with fold indicating characters inserted in safe_load-ed folded strings
+-    (reported by `Maximilian Hils <https://bitbucket.org/mhils/>`__).
+-
+-0.15.65 (2018-09-07):
+-  - fix issue #232 revert to throw ParserError for unexcpected ``]``
+-    and ``}`` instead of IndexError. (Issue reported and PR with fix
+-    provided by `Naomi Seyfer <https://bitbucket.org/sixolet/>`__.)
+-  - added ``key`` and ``reverse`` parameter (suggested by Jannik Klemm via email)
+-  - indent root level literal scalars that have directive or document end markers
+-    at the beginning of a line
+-
+-0.15.64 (2018-08-30):
+-  - support round-trip of tagged sequences: ``!Arg [a, {b: 1}]``
+-  - single entry mappings in flow sequences now written by default without braces,
+-    set ``yaml.brace_single_entry_mapping_in_flow_sequence=True`` to force
+-    getting ``[a, {b: 1}, {c: {d: 2}}]`` instead of the default ``[a, b: 1, c: {d: 2}]``
+-  - fix issue when roundtripping floats starting with a dot such as ``.5``
+-    (reported by `Harrison Gregg <https://bitbucket.org/HarrisonGregg/>`__)
+-
+-0.15.63 (2018-08-29):
+-  - small fix only necessary for Windows users that don't use wheels.
+-
+-0.15.62 (2018-08-29):
+-  - C based reader/scanner & emitter now allow setting of 1.2 as YAML version.
+-    ** The loading/dumping is still YAML 1.1 code**, so use the common subset of
+-    YAML 1.2 and 1.1 (reported by `Ge Yang <https://bitbucket.org/yangge/>`__)
+-
+-0.15.61 (2018-08-23):
+-  - support for round-tripping folded style scalars (initially requested 
+-    by `Johnathan Viduchinsky <https://bitbucket.org/johnathanvidu/>`__)
+-  - update of C code
+-  - speed up of scanning (~30% depending on the input)
+-
+-0.15.60 (2018-08-18):
+-  - again allow single entry map in flow sequence context (reported by 
+-    `Lee Goolsbee <https://bitbucket.org/lgoolsbee/>`__)
+-  - cleanup for mypy 
+-  - spurious print in library (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__), now automatically checked 
+-
+-0.15.59 (2018-08-17):
+-  - issue with C based loader and leading zeros (reported by 
+-    `Tom Hamilton Stubber <https://bitbucket.org/TomHamiltonStubber/>`__)
+-
+-0.15.58 (2018-08-17):
+-  - simple mappings can now be used as keys when round-tripping::
+-
+-      {a: 1, b: 2}: hello world
+-      
+-    although using the obvious operations (del, popitem) on the key will
+-    fail, you can mutilate it by going through its attributes. If you load the
+-    above YAML in `d`, then changing the value is cumbersome:
+-
+-        d = {CommentedKeyMap([('a', 1), ('b', 2)]): "goodbye"}
+-
+-    and changing the key even more so:
+-
+-        d[CommentedKeyMap([('b', 1), ('a', 2)])] = d.pop(
+-                     CommentedKeyMap([('a', 1), ('b', 2)]))
+-
+-    (you can use a `dict` instead of a list of tuples (or ordereddict), but that might result
+-    in a different order, of the keys of the key, in the output)
+-  - check integers to dump with 1.2 patterns instead of 1.1 (reported by 
+-    `Lele Gaifax <https://bitbucket.org/lele/>`__)
+-  
+-
+-0.15.57 (2018-08-15):
+-  - Fix that CommentedSeq could no longer be used in adding or do a sort
+-    (reported by `Christopher Wright <https://bitbucket.org/CJ-Wright4242/>`__)
+-
+-0.15.56 (2018-08-15):
+-  - fix issue with ``python -O`` optimizing away code (reported, and detailed cause
+-    pinpointed, by `Alex Grönholm <https://bitbucket.org/agronholm/>`__)
+-
+-0.15.55 (2018-08-14):
+-  - unmade ``CommentedSeq`` a subclass of ``list``. It is now
+-    indirectly a subclass of the standard
+-    ``collections.abc.MutableSequence`` (without .abc if you are
+-    still on Python2.7). If you do ``isinstance(yaml.load('[1, 2]'),
+-    list)``) anywhere in your code replace ``list`` with
+-    ``MutableSequence``.  Directly, ``CommentedSeq`` is a subclass of
+-    the abstract baseclass ``ruamel.yaml.compat.MutableScliceableSequence``,
+-    with the result that *(extended) slicing is supported on 
+-    ``CommentedSeq``*.
+-    (reported by `Stuart Berg <https://bitbucket.org/stuarteberg/>`__)
+-  - duplicate keys (or their values) with non-ascii now correctly
+-    report in Python2, instead of raising a Unicode error.
+-    (Reported by `Jonathan Pyle <https://bitbucket.org/jonathan_pyle/>`__)
+-
+-0.15.54 (2018-08-13):
+-  - fix issue where a comment could pop-up twice in the output (reported by 
+-    `Mike Kazantsev <https://bitbucket.org/mk_fg/>`__ and by 
+-    `Nate Peterson <https://bitbucket.org/ndpete21/>`__)
+-  - fix issue where JSON object (mapping) without spaces was not parsed
+-    properly (reported by `Marc Schmidt <https://bitbucket.org/marcj/>`__)
+-  - fix issue where comments after empty flow-style mappings were not emitted
+-    (reported by `Qinfench Chen <https://bitbucket.org/flyin5ish/>`__)
+-
+-0.15.53 (2018-08-12):
+-  - fix issue with flow style mapping with comments gobbled newline (reported
+-    by `Christopher Lambert <https://bitbucket.org/XN137/>`__)
+-  - fix issue where single '+' under YAML 1.2 was interpreted as
+-    integer, erroring out (reported by `Jethro Yu
+-    <https://bitbucket.org/jcppkkk/>`__)
+-
+-0.15.52 (2018-08-09):
+-  - added `.copy()` mapping representation for round-tripping
+-    (``CommentedMap``) to fix incomplete copies of merged mappings
+-    (reported by `Will Richards
+-    <https://bitbucket.org/will_richards/>`__) 
+-  - Also unmade that class a subclass of ordereddict to solve incorrect behaviour
+-    for ``{**merged-mapping}`` and ``dict(**merged-mapping)`` (reported independently by
+-    `Tim Olsson <https://bitbucket.org/tgolsson/>`__ and 
+-    `Filip Matzner <https://bitbucket.org/FloopCZ/>`__)
+-
+-0.15.51 (2018-08-08):
+-  - Fix method name dumps (were not dotted) and loads (reported by `Douglas Raillard 
+-    <https://bitbucket.org/DouglasRaillard/>`__)
+-  - Fix spurious trailing white-space caused when the comment start
+-    column was no longer reached and there was no actual EOL comment
+-    (e.g. following empty line) and doing substitutions, or when
+-    quotes around scalars got dropped.  (reported by `Thomas Guillet
+-    <https://bitbucket.org/guillett/>`__)
+-
+-0.15.50 (2018-08-05):
+-  - Allow ``YAML()`` as a context manager for output, thereby making it much easier
+-    to generate multi-documents in a stream. 
+-  - Fix issue with incorrect type information for `load()` and `dump()` (reported 
+-    by `Jimbo Jim <https://bitbucket.org/jimbo1qaz/>`__)
+-
+-0.15.49 (2018-08-05):
+-  - fix preservation of leading newlines in root level literal style scalar,
+-    and preserve comment after literal style indicator (``|  # some comment``)
+-    Both needed for round-tripping multi-doc streams in 
+-    `ryd <https://pypi.org/project/ryd/>`__.
+-
+-0.15.48 (2018-08-03):
+-  - housekeeping: ``oitnb`` for formatting, mypy 0.620 upgrade and conformity
+-
+-0.15.47 (2018-07-31):
+-  - fix broken 3.6 manylinux1, the result of an unclean ``build`` (reported by 
+-    `Roman Sichnyi <https://bitbucket.org/rsichnyi-gl/>`__)
+-
+-
+-0.15.46 (2018-07-29):
+-  - fixed DeprecationWarning for importing from ``collections`` on 3.7
+-    (issue 210, reported by `Reinoud Elhorst
+-    <https://bitbucket.org/reinhrst/>`__). It was `difficult to find
+-    why tox/pytest did not report
+-    <https://stackoverflow.com/q/51573204/1307905>`__ and as time
+-    consuming to actually `fix
+-    <https://stackoverflow.com/a/51573205/1307905>`__ the tests.
+-
+-0.15.45 (2018-07-26):
+-  - After adding failing test for ``YAML.load_all(Path())``, remove StopIteration 
+-    (PR provided by `Zachary Buhman <https://bitbucket.org/buhman/>`__,
+-    also reported by `Steven Hiscocks <https://bitbucket.org/sdhiscocks/>`__.
+-
+-0.15.44 (2018-07-14):
+-  - Correct loading plain scalars consisting of numerals only and
+-    starting with `0`, when not explicitly specifying YAML version
+-    1.1. This also fixes the issue about dumping string `'019'` as
+-    plain scalars as reported by `Min RK
+-    <https://bitbucket.org/minrk/>`__, that prompted this chance.
+-
+-0.15.43 (2018-07-12):
+-  - merge PR33: Python2.7 on Windows is narrow, but has no
+-    ``sysconfig.get_config_var('Py_UNICODE_SIZE')``. (merge provided by
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__)
+-  - ``register_class()`` now returns class (proposed by
+-    `Mike Nerone <https://bitbucket.org/Manganeez/>`__}
+-
+-0.15.42 (2018-07-01):
+-  - fix regression showing only on narrow Python 2.7 (py27mu) builds
+-    (with help from
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__ and
+-    `Colm O'Connor <https://bitbucket.org/colmoconnorgithub/>`__).
+-  - run pre-commit ``tox`` on Python 2.7 wide and narrow, as well as
+-    3.4/3.5/3.6/3.7/pypy
+-
+-0.15.41 (2018-06-27):
+-  - add detection of C-compile failure (investigation prompted by
+-    `StackOverlow <https://stackoverflow.com/a/51057399/1307905>`__ by
+-    `Emmanuel Blot <https://stackoverflow.com/users/8233409/emmanuel-blot>`__),
+-    which was removed while no longer dependent on ``libyaml``, C-extensions
+-    compilation still needs a compiler though.
+-
+-0.15.40 (2018-06-18):
+-  - added links to landing places as suggested in issue 190 by
+-    `KostisA <https://bitbucket.org/ankostis/>`__
+-  - fixes issue #201: decoding unicode escaped tags on Python2, reported
+-    by `Dan Abolafia <https://bitbucket.org/danabo/>`__
+-
+-0.15.39 (2018-06-17):
+-  - merge PR27 improving package startup time (and loading when regexp not
+-    actually used), provided by
+-    `Marcel Bargull <https://bitbucket.org/mbargull/>`__
+-
+-0.15.38 (2018-06-13):
+-  - fix for losing precision when roundtripping floats by
+-    `Rolf Wojtech <https://bitbucket.org/asomov/>`__
+-  - fix for hardcoded dir separator not working for Windows by
+-    `Nuno André <https://bitbucket.org/nu_no/>`__
+-  - typo fix by `Andrey Somov <https://bitbucket.org/asomov/>`__
+-
+-0.15.37 (2018-03-21):
+-  - again trying to create installable files for 187
+-
+-0.15.36 (2018-02-07):
+-  - fix issue 187, incompatibility of C extension with 3.7 (reported by
+-    Daniel Blanchard)
+-
+-0.15.35 (2017-12-03):
+-  - allow ``None`` as stream when specifying ``transform`` parameters to
+-    ``YAML.dump()``.
+-    This is useful if the transforming function doesn't return a meaningful value
+-    (inspired by `StackOverflow <https://stackoverflow.com/q/47614862/1307905>`__ by
+-    `rsaw <https://stackoverflow.com/users/406281/rsaw>`__).
+-
+-0.15.34 (2017-09-17):
+-  - fix for issue 157: CDumper not dumping floats (reported by Jan Smitka)
+-
+-0.15.33 (2017-08-31):
+-  - support for "undefined" round-tripping tagged scalar objects (in addition to
+-    tagged mapping object). Inspired by a use case presented by Matthew Patton
+-    on `StackOverflow <https://stackoverflow.com/a/45967047/1307905>`__.
+-  - fix issue 148: replace cryptic error message when using !!timestamp with an
+-    incorrectly formatted or non- scalar. Reported by FichteFoll.
+-
+-0.15.32 (2017-08-21):
+-  - allow setting ``yaml.default_flow_style = None`` (default: ``False``) for
+-    for ``typ='rt'``.
+-  - fix for issue 149: multiplications on ``ScalarFloat`` now return ``float``
+-    (reported by jan.brezina@tul.cz)
+-
+-0.15.31 (2017-08-15):
+-  - fix Comment dumping
+-
+-0.15.30 (2017-08-14):
+-  - fix for issue with "compact JSON" not parsing: ``{"in":{},"out":{}}``
+-    (reported on `StackOverflow <https://stackoverflow.com/q/45681626/1307905>`__ by
+-    `mjalkio <https://stackoverflow.com/users/5130525/mjalkio>`_
+-
+-0.15.29 (2017-08-14):
+-  - fix issue #51: different indents for mappings and sequences (reported by
+-    Alex Harvey)
+-  - fix for flow sequence/mapping as element/value of block sequence with
+-    sequence-indent minus dash-offset not equal two.
+-
+-0.15.28 (2017-08-13):
+-  - fix issue #61: merge of merge cannot be __repr__-ed (reported by Tal Liron)
+-
+-0.15.27 (2017-08-13):
+-  - fix issue 62, YAML 1.2 allows ``?`` and ``:`` in plain scalars if non-ambigious
+-    (reported by nowox)
+-  - fix lists within lists which would make comments disappear
+-
+-0.15.26 (2017-08-10):
+-  - fix for disappearing comment after empty flow sequence (reported by
+-    oit-tzhimmash)
+-
+-0.15.25 (2017-08-09):
+-  - fix for problem with dumping (unloaded) floats (reported by eyenseo)
+-
+-0.15.24 (2017-08-09):
+-  - added ScalarFloat which supports roundtripping of 23.1, 23.100,
+-    42.00E+56, 0.0, -0.0 etc. while keeping the format. Underscores in mantissas
+-    are not preserved/supported (yet, is anybody using that?).
+-  - (finally) fixed longstanding issue 23 (reported by `Antony Sottile
+-    <https://bitbucket.org/asottile/>`__), now handling comment between block
+-    mapping key and value correctly
+-  - warn on YAML 1.1 float input that is incorrect (triggered by invalid YAML
+-    provided by Cecil Curry)
+-  - allow setting of boolean representation (`false`, `true`) by using:
+-    ``yaml.boolean_representation = [u'False', u'True']``
+-
+-0.15.23 (2017-08-01):
+-  - fix for round_tripping integers on 2.7.X > sys.maxint (reported by ccatterina)
+-
+-0.15.22 (2017-07-28):
+-  - fix for round_tripping singe excl. mark tags doubling (reported and fix by Jan Brezina)
+-
+-0.15.21 (2017-07-25):
+-  - fix for writing unicode in new API, (reported on
+-    `StackOverflow <https://stackoverflow.com/a/45281922/1307905>`__
+-
+-0.15.20 (2017-07-23):
+-  - wheels for windows including C extensions
+-
+-0.15.19 (2017-07-13):
+-  - added object constructor for rt, decorator ``yaml_object`` to replace YAMLObject.
+-  - fix for problem using load_all with Path() instance
+-  - fix for load_all in combination with zero indent block style literal
+-    (``pure=True`` only!)
+-
+-0.15.18 (2017-07-04):
+-  - missing ``pure`` attribute on ``YAML`` useful for implementing `!include` tag
+-    constructor for `including YAML files in a YAML file
+-    <https://stackoverflow.com/a/44913652/1307905>`__
+-  - some documentation improvements
+-  - trigger of doc build on new revision
+-
+-0.15.17 (2017-07-03):
+-  - support for Unicode supplementary Plane **output**
+-    (input was already supported, triggered by
+-    `this <https://stackoverflow.com/a/44875714/1307905>`__ Stack Overflow Q&A)
+-
+-0.15.16 (2017-07-01):
+-  - minor typing issues (reported and fix provided by
+-    `Manvendra Singh <https://bitbucket.org/manu-chroma/>`__
+-  - small doc improvements
+-
+-0.15.15 (2017-06-27):
+-  - fix for issue 135, typ='safe' not dumping in Python 2.7
+-    (reported by Andrzej Ostrowski <https://bitbucket.org/aostr123/>`__)
+-
+-0.15.14 (2017-06-25):
+-  - fix for issue 133, in setup.py: change ModuleNotFoundError to
+-    ImportError (reported and fix by
+-    `Asley Drake  <https://github.com/aldraco>`__)
+-
+-0.15.13 (2017-06-24):
+-  - suppress duplicate key warning on mappings with merge keys (reported by
+-    Cameron Sweeney)
+-
+-0.15.12 (2017-06-24):
+-  - remove fatal dependency of setup.py on wheel package (reported by
+-    Cameron Sweeney)
+-
+-0.15.11 (2017-06-24):
+-  - fix for issue 130, regression in nested merge keys (reported by
+-    `David Fee <https://bitbucket.org/dfee/>`__)
+-
+-0.15.10 (2017-06-23):
+-  - top level PreservedScalarString not indented if not explicitly asked to
+-  - remove Makefile (not very useful anyway)
+-  - some mypy additions
+-
+-0.15.9 (2017-06-16):
+-  - fix for issue 127: tagged scalars were always quoted and seperated
+-    by a newline when in a block sequence (reported and largely fixed by
+-    `Tommy Wang <https://bitbucket.org/twang817/>`__)
+-
+-0.15.8 (2017-06-15):
+-  - allow plug-in install via ``install ruamel.yaml[jinja2]``
+-
+-0.15.7 (2017-06-14):
+-  - add plug-in mechanism for load/dump pre resp. post-processing
+-
+-0.15.6 (2017-06-10):
+-  - a set() with duplicate elements now throws error in rt loading
+-  - support for toplevel column zero literal/folded scalar in explicit documents
+-
+-0.15.5 (2017-06-08):
+-  - repeat `load()` on a single `YAML()` instance would fail.
+-
+-0.15.4 (2017-06-08):
+-  - `transform` parameter on dump that expects a function taking a
+-    string and returning a string. This allows transformation of the output
+-    before it is written to stream. This forces creation of the complete output in memory!
+-  - some updates to the docs
+-
+-0.15.3 (2017-06-07):
+-  - No longer try to compile C extensions on Windows. Compilation can be forced by setting
+-    the environment variable `RUAMEL_FORCE_EXT_BUILD` to some value
+-    before starting the `pip install`.
+-
+-0.15.2 (2017-06-07):
+-  - update to conform to mypy 0.511: mypy --strict
+-
+-0.15.1 (2017-06-07):
+-  - `duplicate keys  <http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys>`__
+-    in mappings generate an error (in the old API this change generates a warning until 0.16)
+-  - dependecy on ruamel.ordereddict for 2.7 now via extras_require
+-
+-0.15.0 (2017-06-04):
+-  - it is now allowed to pass in a ``pathlib.Path`` as "stream" parameter to all
+-    load/dump functions
+-  - passing in a non-supported object (e.g. a string) as "stream" will result in a
+-    much more meaningful YAMLStreamError.
+-  - assigning a normal string value to an existing CommentedMap key or CommentedSeq
+-    element will result in a value cast to the previous value's type if possible.
+-  - added ``YAML`` class for new API
+-
+-0.14.12 (2017-05-14):
+-  - fix for issue 119, deepcopy not returning subclasses (reported and PR by
+-    Constantine Evans <cevans@evanslabs.org>)
+-
+-0.14.11 (2017-05-01):
+-  - fix for issue 103 allowing implicit documents after document end marker line (``...``)
+-    in YAML 1.2
+-
+-0.14.10 (2017-04-26):
+-  - fix problem with emitting using cyaml
+-
+-0.14.9 (2017-04-22):
+-  - remove dependency on ``typing`` while still supporting ``mypy``
+-    (http://stackoverflow.com/a/43516781/1307905)
+-  - fix unclarity in doc that stated 2.6 is supported (reported by feetdust)
+-
+-0.14.8 (2017-04-19):
+-  - fix Text not available on 3.5.0 and 3.5.1, now proactively setting version guards
+-    on all files (reported by `João Paulo Magalhães <https://bitbucket.org/jpmag/>`__)
+-
+-0.14.7 (2017-04-18):
+-  - round trip of integers (decimal, octal, hex, binary) now preserve
+-    leading zero(s) padding and underscores. Underscores are presumed
+-    to be at regular distances (i.e. ``0o12_345_67`` dumps back as
+-    ``0o1_23_45_67`` as the space from the last digit to the
+-    underscore before that is the determining factor).
+-
+-0.14.6 (2017-04-14):
+-  - binary, octal and hex integers are now preserved by default. This
+-    was a known deficiency. Working on this was prompted by the issue report (112)
+-    from devnoname120, as well as the additional experience with `.replace()`
+-    on `scalarstring` classes.
+-  - fix issues 114: cannot install on Buildozer (reported by mixmastamyk).
+-    Setting env. var ``RUAMEL_NO_PIP_INSTALL_CHECK`` will suppress ``pip``-check.
+-
+-0.14.5 (2017-04-04):
+-  - fix issue 109: None not dumping correctly at top level (reported by Andrea Censi)
+-  - fix issue 110: .replace on Preserved/DoubleQuoted/SingleQuoted ScalarString
+-    would give back "normal" string (reported by sandres23)
+-
+-0.14.4 (2017-03-31):
+-  - fix readme
+-
+-0.14.3 (2017-03-31):
+-  - fix for 0o52 not being a string in YAML 1.1 (reported on
+-    `StackOverflow Q&A 43138503 <http://stackoverflow.com/a/43138503/1307905>`__ by
+-    `Frank D <http://stackoverflow.com/users/7796630/frank-d>`__)
+-
+-0.14.2 (2017-03-23):
+-  - fix for old default pip on Ubuntu 14.04 (reported by Sébastien Maccagnoni-Munch)
+-
+-0.14.1 (2017-03-22):
+-  - fix Text not available on 3.5.0 and 3.5.1 (reported by Charles Bouchard-Légaré)
+-
+-0.14.0 (2017-03-21):
+-  - updates for mypy --strict
+-  - preparation for moving away from inheritance in Loader and Dumper, calls from e.g.
+-    the Representer to the Serializer.serialize() are now done via the attribute
+-    .serializer.serialize(). Usage of .serialize() outside of Serializer will be
+-    deprecated soon
+-  - some extra tests on main.py functions
+-
+-----
+-
+-For older changes see the file
+-`CHANGES <https://bitbucket.org/ruamel/yaml/src/default/CHANGES>`_
+diff --git a/dynaconf/vendor_src/ruamel/yaml/__init__.py b/dynaconf/vendor_src/ruamel/yaml/__init__.py
+deleted file mode 100644
+index 8663a56..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/__init__.py
++++ /dev/null
+@@ -1,60 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-if False:  # MYPY
+-    from typing import Dict, Any  # NOQA
+-
+-_package_data = dict(
+-    full_package_name='ruamel.yaml',
+-    version_info=(0, 16, 10),
+-    __version__='0.16.10',
+-    author='Anthon van der Neut',
+-    author_email='a.van.der.neut@ruamel.eu',
+-    description='ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order',  # NOQA
+-    entry_points=None,
+-    since=2014,
+-    extras_require={':platform_python_implementation=="CPython" and python_version<="2.7"': [
+-            'ruamel.ordereddict',
+-        ], ':platform_python_implementation=="CPython" and python_version<"3.9"': [
+-            'ruamel.yaml.clib>=0.1.2',
+-        ], 'jinja2': ['ruamel.yaml.jinja2>=0.2'], 'docs': ['ryd']},
+-    # NOQA
+-    # test='#include "ext/yaml.h"\n\nint main(int argc, char* argv[])\n{\nyaml_parser_t parser;\nparser = parser;  /* prevent warning */\nreturn 0;\n}\n',  # NOQA
+-    classifiers=[
+-            'Programming Language :: Python :: 2.7',
+-            'Programming Language :: Python :: 3.5',
+-            'Programming Language :: Python :: 3.6',
+-            'Programming Language :: Python :: 3.7',
+-            'Programming Language :: Python :: 3.8',
+-            'Programming Language :: Python :: Implementation :: CPython',
+-            'Programming Language :: Python :: Implementation :: PyPy',
+-            'Programming Language :: Python :: Implementation :: Jython',
+-            'Topic :: Software Development :: Libraries :: Python Modules',
+-            'Topic :: Text Processing :: Markup',
+-            'Typing :: Typed',
+-    ],
+-    keywords='yaml 1.2 parser round-trip preserve quotes order config',
+-    read_the_docs='yaml',
+-    supported=[(2, 7), (3, 5)],  # minimum
+-    tox=dict(
+-        env='*',  # remove 'pn', no longer test narrow Python 2.7 for unicode patterns and PyPy
+-        deps='ruamel.std.pathlib',
+-        fl8excl='_test/lib',
+-    ),
+-    universal=True,
+-    rtfd='yaml',
+-)  # type: Dict[Any, Any]
+-
+-
+-version_info = _package_data['version_info']
+-__version__ = _package_data['__version__']
+-
+-try:
+-    from .cyaml import *  # NOQA
+-
+-    __with_libyaml__ = True
+-except (ImportError, ValueError):  # for Jython
+-    __with_libyaml__ = False
+-
+-from dynaconf.vendor.ruamel.yaml.main import *  # NOQA
+diff --git a/dynaconf/vendor_src/ruamel/yaml/anchor.py b/dynaconf/vendor_src/ruamel/yaml/anchor.py
+deleted file mode 100644
+index aa649f5..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/anchor.py
++++ /dev/null
+@@ -1,20 +0,0 @@
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Union, Optional, Iterator  # NOQA
+-
+-anchor_attrib = '_yaml_anchor'
+-
+-
+-class Anchor(object):
+-    __slots__ = 'value', 'always_dump'
+-    attrib = anchor_attrib
+-
+-    def __init__(self):
+-        # type: () -> None
+-        self.value = None
+-        self.always_dump = False
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        ad = ', (always dump)' if self.always_dump else ""
+-        return 'Anchor({!r}{})'.format(self.value, ad)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/comments.py b/dynaconf/vendor_src/ruamel/yaml/comments.py
+deleted file mode 100644
+index 1ca210a..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/comments.py
++++ /dev/null
+@@ -1,1149 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import, print_function
+-
+-"""
+-stuff to deal with comments and formatting on dict/list/ordereddict/set
+-these are not really related, formatting could be factored out as
+-a separate base
+-"""
+-
+-import sys
+-import copy
+-
+-
+-from .compat import ordereddict  # type: ignore
+-from .compat import PY2, string_types, MutableSliceableSequence
+-from .scalarstring import ScalarString
+-from .anchor import Anchor
+-
+-if PY2:
+-    from collections import MutableSet, Sized, Set, Mapping
+-else:
+-    from collections.abc import MutableSet, Sized, Set, Mapping
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Union, Optional, Iterator  # NOQA
+-
+-# fmt: off
+-__all__ = ['CommentedSeq', 'CommentedKeySeq',
+-           'CommentedMap', 'CommentedOrderedMap',
+-           'CommentedSet', 'comment_attrib', 'merge_attrib']
+-# fmt: on
+-
+-comment_attrib = '_yaml_comment'
+-format_attrib = '_yaml_format'
+-line_col_attrib = '_yaml_line_col'
+-merge_attrib = '_yaml_merge'
+-tag_attrib = '_yaml_tag'
+-
+-
+-class Comment(object):
+-    # sys.getsize tested the Comment objects, __slots__ makes them bigger
+-    # and adding self.end did not matter
+-    __slots__ = 'comment', '_items', '_end', '_start'
+-    attrib = comment_attrib
+-
+-    def __init__(self):
+-        # type: () -> None
+-        self.comment = None  # [post, [pre]]
+-        # map key (mapping/omap/dict) or index (sequence/list) to a  list of
+-        # dict: post_key, pre_key, post_value, pre_value
+-        # list: pre item, post item
+-        self._items = {}  # type: Dict[Any, Any]
+-        # self._start = [] # should not put these on first item
+-        self._end = []  # type: List[Any] # end of document comments
+-
+-    def __str__(self):
+-        # type: () -> str
+-        if bool(self._end):
+-            end = ',\n  end=' + str(self._end)
+-        else:
+-            end = ""
+-        return 'Comment(comment={0},\n  items={1}{2})'.format(self.comment, self._items, end)
+-
+-    @property
+-    def items(self):
+-        # type: () -> Any
+-        return self._items
+-
+-    @property
+-    def end(self):
+-        # type: () -> Any
+-        return self._end
+-
+-    @end.setter
+-    def end(self, value):
+-        # type: (Any) -> None
+-        self._end = value
+-
+-    @property
+-    def start(self):
+-        # type: () -> Any
+-        return self._start
+-
+-    @start.setter
+-    def start(self, value):
+-        # type: (Any) -> None
+-        self._start = value
+-
+-
+-# to distinguish key from None
+-def NoComment():
+-    # type: () -> None
+-    pass
+-
+-
+-class Format(object):
+-    __slots__ = ('_flow_style',)
+-    attrib = format_attrib
+-
+-    def __init__(self):
+-        # type: () -> None
+-        self._flow_style = None  # type: Any
+-
+-    def set_flow_style(self):
+-        # type: () -> None
+-        self._flow_style = True
+-
+-    def set_block_style(self):
+-        # type: () -> None
+-        self._flow_style = False
+-
+-    def flow_style(self, default=None):
+-        # type: (Optional[Any]) -> Any
+-        """if default (the flow_style) is None, the flow style tacked on to
+-        the object explicitly will be taken. If that is None as well the
+-        default flow style rules the format down the line, or the type
+-        of the constituent values (simple -> flow, map/list -> block)"""
+-        if self._flow_style is None:
+-            return default
+-        return self._flow_style
+-
+-
+-class LineCol(object):
+-    attrib = line_col_attrib
+-
+-    def __init__(self):
+-        # type: () -> None
+-        self.line = None
+-        self.col = None
+-        self.data = None  # type: Optional[Dict[Any, Any]]
+-
+-    def add_kv_line_col(self, key, data):
+-        # type: (Any, Any) -> None
+-        if self.data is None:
+-            self.data = {}
+-        self.data[key] = data
+-
+-    def key(self, k):
+-        # type: (Any) -> Any
+-        return self._kv(k, 0, 1)
+-
+-    def value(self, k):
+-        # type: (Any) -> Any
+-        return self._kv(k, 2, 3)
+-
+-    def _kv(self, k, x0, x1):
+-        # type: (Any, Any, Any) -> Any
+-        if self.data is None:
+-            return None
+-        data = self.data[k]
+-        return data[x0], data[x1]
+-
+-    def item(self, idx):
+-        # type: (Any) -> Any
+-        if self.data is None:
+-            return None
+-        return self.data[idx][0], self.data[idx][1]
+-
+-    def add_idx_line_col(self, key, data):
+-        # type: (Any, Any) -> None
+-        if self.data is None:
+-            self.data = {}
+-        self.data[key] = data
+-
+-
+-class Tag(object):
+-    """store tag information for roundtripping"""
+-
+-    __slots__ = ('value',)
+-    attrib = tag_attrib
+-
+-    def __init__(self):
+-        # type: () -> None
+-        self.value = None
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        return '{0.__class__.__name__}({0.value!r})'.format(self)
+-
+-
+-class CommentedBase(object):
+-    @property
+-    def ca(self):
+-        # type: () -> Any
+-        if not hasattr(self, Comment.attrib):
+-            setattr(self, Comment.attrib, Comment())
+-        return getattr(self, Comment.attrib)
+-
+-    def yaml_end_comment_extend(self, comment, clear=False):
+-        # type: (Any, bool) -> None
+-        if comment is None:
+-            return
+-        if clear or self.ca.end is None:
+-            self.ca.end = []
+-        self.ca.end.extend(comment)
+-
+-    def yaml_key_comment_extend(self, key, comment, clear=False):
+-        # type: (Any, Any, bool) -> None
+-        r = self.ca._items.setdefault(key, [None, None, None, None])
+-        if clear or r[1] is None:
+-            if comment[1] is not None:
+-                assert isinstance(comment[1], list)
+-            r[1] = comment[1]
+-        else:
+-            r[1].extend(comment[0])
+-        r[0] = comment[0]
+-
+-    def yaml_value_comment_extend(self, key, comment, clear=False):
+-        # type: (Any, Any, bool) -> None
+-        r = self.ca._items.setdefault(key, [None, None, None, None])
+-        if clear or r[3] is None:
+-            if comment[1] is not None:
+-                assert isinstance(comment[1], list)
+-            r[3] = comment[1]
+-        else:
+-            r[3].extend(comment[0])
+-        r[2] = comment[0]
+-
+-    def yaml_set_start_comment(self, comment, indent=0):
+-        # type: (Any, Any) -> None
+-        """overwrites any preceding comment lines on an object
+-        expects comment to be without `#` and possible have multiple lines
+-        """
+-        from .error import CommentMark
+-        from .tokens import CommentToken
+-
+-        pre_comments = self._yaml_get_pre_comment()
+-        if comment[-1] == '\n':
+-            comment = comment[:-1]  # strip final newline if there
+-        start_mark = CommentMark(indent)
+-        for com in comment.split('\n'):
+-            pre_comments.append(CommentToken('# ' + com + '\n', start_mark, None))
+-
+-    def yaml_set_comment_before_after_key(
+-        self, key, before=None, indent=0, after=None, after_indent=None
+-    ):
+-        # type: (Any, Any, Any, Any, Any) -> None
+-        """
+-        expects comment (before/after) to be without `#` and possible have multiple lines
+-        """
+-        from dynaconf.vendor.ruamel.yaml.error import CommentMark
+-        from dynaconf.vendor.ruamel.yaml.tokens import CommentToken
+-
+-        def comment_token(s, mark):
+-            # type: (Any, Any) -> Any
+-            # handle empty lines as having no comment
+-            return CommentToken(('# ' if s else "") + s + '\n', mark, None)
+-
+-        if after_indent is None:
+-            after_indent = indent + 2
+-        if before and (len(before) > 1) and before[-1] == '\n':
+-            before = before[:-1]  # strip final newline if there
+-        if after and after[-1] == '\n':
+-            after = after[:-1]  # strip final newline if there
+-        start_mark = CommentMark(indent)
+-        c = self.ca.items.setdefault(key, [None, [], None, None])
+-        if before == '\n':
+-            c[1].append(comment_token("", start_mark))
+-        elif before:
+-            for com in before.split('\n'):
+-                c[1].append(comment_token(com, start_mark))
+-        if after:
+-            start_mark = CommentMark(after_indent)
+-            if c[3] is None:
+-                c[3] = []
+-            for com in after.split('\n'):
+-                c[3].append(comment_token(com, start_mark))  # type: ignore
+-
+-    @property
+-    def fa(self):
+-        # type: () -> Any
+-        """format attribute
+-
+-        set_flow_style()/set_block_style()"""
+-        if not hasattr(self, Format.attrib):
+-            setattr(self, Format.attrib, Format())
+-        return getattr(self, Format.attrib)
+-
+-    def yaml_add_eol_comment(self, comment, key=NoComment, column=None):
+-        # type: (Any, Optional[Any], Optional[Any]) -> None
+-        """
+-        there is a problem as eol comments should start with ' #'
+-        (but at the beginning of the line the space doesn't have to be before
+-        the #. The column index is for the # mark
+-        """
+-        from .tokens import CommentToken
+-        from .error import CommentMark
+-
+-        if column is None:
+-            try:
+-                column = self._yaml_get_column(key)
+-            except AttributeError:
+-                column = 0
+-        if comment[0] != '#':
+-            comment = '# ' + comment
+-        if column is None:
+-            if comment[0] == '#':
+-                comment = ' ' + comment
+-                column = 0
+-        start_mark = CommentMark(column)
+-        ct = [CommentToken(comment, start_mark, None), None]
+-        self._yaml_add_eol_comment(ct, key=key)
+-
+-    @property
+-    def lc(self):
+-        # type: () -> Any
+-        if not hasattr(self, LineCol.attrib):
+-            setattr(self, LineCol.attrib, LineCol())
+-        return getattr(self, LineCol.attrib)
+-
+-    def _yaml_set_line_col(self, line, col):
+-        # type: (Any, Any) -> None
+-        self.lc.line = line
+-        self.lc.col = col
+-
+-    def _yaml_set_kv_line_col(self, key, data):
+-        # type: (Any, Any) -> None
+-        self.lc.add_kv_line_col(key, data)
+-
+-    def _yaml_set_idx_line_col(self, key, data):
+-        # type: (Any, Any) -> None
+-        self.lc.add_idx_line_col(key, data)
+-
+-    @property
+-    def anchor(self):
+-        # type: () -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            setattr(self, Anchor.attrib, Anchor())
+-        return getattr(self, Anchor.attrib)
+-
+-    def yaml_anchor(self):
+-        # type: () -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            return None
+-        return self.anchor
+-
+-    def yaml_set_anchor(self, value, always_dump=False):
+-        # type: (Any, bool) -> None
+-        self.anchor.value = value
+-        self.anchor.always_dump = always_dump
+-
+-    @property
+-    def tag(self):
+-        # type: () -> Any
+-        if not hasattr(self, Tag.attrib):
+-            setattr(self, Tag.attrib, Tag())
+-        return getattr(self, Tag.attrib)
+-
+-    def yaml_set_tag(self, value):
+-        # type: (Any) -> None
+-        self.tag.value = value
+-
+-    def copy_attributes(self, t, memo=None):
+-        # type: (Any, Any) -> None
+-        # fmt: off
+-        for a in [Comment.attrib, Format.attrib, LineCol.attrib, Anchor.attrib,
+-                  Tag.attrib, merge_attrib]:
+-            if hasattr(self, a):
+-                if memo is not None:
+-                    setattr(t, a, copy.deepcopy(getattr(self, a, memo)))
+-                else:
+-                    setattr(t, a, getattr(self, a))
+-        # fmt: on
+-
+-    def _yaml_add_eol_comment(self, comment, key):
+-        # type: (Any, Any) -> None
+-        raise NotImplementedError
+-
+-    def _yaml_get_pre_comment(self):
+-        # type: () -> Any
+-        raise NotImplementedError
+-
+-    def _yaml_get_column(self, key):
+-        # type: (Any) -> Any
+-        raise NotImplementedError
+-
+-
+-class CommentedSeq(MutableSliceableSequence, list, CommentedBase):  # type: ignore
+-    __slots__ = (Comment.attrib, '_lst')
+-
+-    def __init__(self, *args, **kw):
+-        # type: (Any, Any) -> None
+-        list.__init__(self, *args, **kw)
+-
+-    def __getsingleitem__(self, idx):
+-        # type: (Any) -> Any
+-        return list.__getitem__(self, idx)
+-
+-    def __setsingleitem__(self, idx, value):
+-        # type: (Any, Any) -> None
+-        # try to preserve the scalarstring type if setting an existing key to a new value
+-        if idx < len(self):
+-            if (
+-                isinstance(value, string_types)
+-                and not isinstance(value, ScalarString)
+-                and isinstance(self[idx], ScalarString)
+-            ):
+-                value = type(self[idx])(value)
+-        list.__setitem__(self, idx, value)
+-
+-    def __delsingleitem__(self, idx=None):
+-        # type: (Any) -> Any
+-        list.__delitem__(self, idx)
+-        self.ca.items.pop(idx, None)  # might not be there -> default value
+-        for list_index in sorted(self.ca.items):
+-            if list_index < idx:
+-                continue
+-            self.ca.items[list_index - 1] = self.ca.items.pop(list_index)
+-
+-    def __len__(self):
+-        # type: () -> int
+-        return list.__len__(self)
+-
+-    def insert(self, idx, val):
+-        # type: (Any, Any) -> None
+-        """the comments after the insertion have to move forward"""
+-        list.insert(self, idx, val)
+-        for list_index in sorted(self.ca.items, reverse=True):
+-            if list_index < idx:
+-                break
+-            self.ca.items[list_index + 1] = self.ca.items.pop(list_index)
+-
+-    def extend(self, val):
+-        # type: (Any) -> None
+-        list.extend(self, val)
+-
+-    def __eq__(self, other):
+-        # type: (Any) -> bool
+-        return list.__eq__(self, other)
+-
+-    def _yaml_add_comment(self, comment, key=NoComment):
+-        # type: (Any, Optional[Any]) -> None
+-        if key is not NoComment:
+-            self.yaml_key_comment_extend(key, comment)
+-        else:
+-            self.ca.comment = comment
+-
+-    def _yaml_add_eol_comment(self, comment, key):
+-        # type: (Any, Any) -> None
+-        self._yaml_add_comment(comment, key=key)
+-
+-    def _yaml_get_columnX(self, key):
+-        # type: (Any) -> Any
+-        return self.ca.items[key][0].start_mark.column
+-
+-    def _yaml_get_column(self, key):
+-        # type: (Any) -> Any
+-        column = None
+-        sel_idx = None
+-        pre, post = key - 1, key + 1
+-        if pre in self.ca.items:
+-            sel_idx = pre
+-        elif post in self.ca.items:
+-            sel_idx = post
+-        else:
+-            # self.ca.items is not ordered
+-            for row_idx, _k1 in enumerate(self):
+-                if row_idx >= key:
+-                    break
+-                if row_idx not in self.ca.items:
+-                    continue
+-                sel_idx = row_idx
+-        if sel_idx is not None:
+-            column = self._yaml_get_columnX(sel_idx)
+-        return column
+-
+-    def _yaml_get_pre_comment(self):
+-        # type: () -> Any
+-        pre_comments = []  # type: List[Any]
+-        if self.ca.comment is None:
+-            self.ca.comment = [None, pre_comments]
+-        else:
+-            self.ca.comment[1] = pre_comments
+-        return pre_comments
+-
+-    def __deepcopy__(self, memo):
+-        # type: (Any) -> Any
+-        res = self.__class__()
+-        memo[id(self)] = res
+-        for k in self:
+-            res.append(copy.deepcopy(k, memo))
+-            self.copy_attributes(res, memo=memo)
+-        return res
+-
+-    def __add__(self, other):
+-        # type: (Any) -> Any
+-        return list.__add__(self, other)
+-
+-    def sort(self, key=None, reverse=False):  # type: ignore
+-        # type: (Any, bool) -> None
+-        if key is None:
+-            tmp_lst = sorted(zip(self, range(len(self))), reverse=reverse)
+-            list.__init__(self, [x[0] for x in tmp_lst])
+-        else:
+-            tmp_lst = sorted(
+-                zip(map(key, list.__iter__(self)), range(len(self))), reverse=reverse
+-            )
+-            list.__init__(self, [list.__getitem__(self, x[1]) for x in tmp_lst])
+-        itm = self.ca.items
+-        self.ca._items = {}
+-        for idx, x in enumerate(tmp_lst):
+-            old_index = x[1]
+-            if old_index in itm:
+-                self.ca.items[idx] = itm[old_index]
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        return list.__repr__(self)
+-
+-
+-class CommentedKeySeq(tuple, CommentedBase):  # type: ignore
+-    """This primarily exists to be able to roundtrip keys that are sequences"""
+-
+-    def _yaml_add_comment(self, comment, key=NoComment):
+-        # type: (Any, Optional[Any]) -> None
+-        if key is not NoComment:
+-            self.yaml_key_comment_extend(key, comment)
+-        else:
+-            self.ca.comment = comment
+-
+-    def _yaml_add_eol_comment(self, comment, key):
+-        # type: (Any, Any) -> None
+-        self._yaml_add_comment(comment, key=key)
+-
+-    def _yaml_get_columnX(self, key):
+-        # type: (Any) -> Any
+-        return self.ca.items[key][0].start_mark.column
+-
+-    def _yaml_get_column(self, key):
+-        # type: (Any) -> Any
+-        column = None
+-        sel_idx = None
+-        pre, post = key - 1, key + 1
+-        if pre in self.ca.items:
+-            sel_idx = pre
+-        elif post in self.ca.items:
+-            sel_idx = post
+-        else:
+-            # self.ca.items is not ordered
+-            for row_idx, _k1 in enumerate(self):
+-                if row_idx >= key:
+-                    break
+-                if row_idx not in self.ca.items:
+-                    continue
+-                sel_idx = row_idx
+-        if sel_idx is not None:
+-            column = self._yaml_get_columnX(sel_idx)
+-        return column
+-
+-    def _yaml_get_pre_comment(self):
+-        # type: () -> Any
+-        pre_comments = []  # type: List[Any]
+-        if self.ca.comment is None:
+-            self.ca.comment = [None, pre_comments]
+-        else:
+-            self.ca.comment[1] = pre_comments
+-        return pre_comments
+-
+-
+-class CommentedMapView(Sized):
+-    __slots__ = ('_mapping',)
+-
+-    def __init__(self, mapping):
+-        # type: (Any) -> None
+-        self._mapping = mapping
+-
+-    def __len__(self):
+-        # type: () -> int
+-        count = len(self._mapping)
+-        return count
+-
+-
+-class CommentedMapKeysView(CommentedMapView, Set):  # type: ignore
+-    __slots__ = ()
+-
+-    @classmethod
+-    def _from_iterable(self, it):
+-        # type: (Any) -> Any
+-        return set(it)
+-
+-    def __contains__(self, key):
+-        # type: (Any) -> Any
+-        return key in self._mapping
+-
+-    def __iter__(self):
+-        # type: () -> Any  # yield from self._mapping  # not in py27, pypy
+-        # for x in self._mapping._keys():
+-        for x in self._mapping:
+-            yield x
+-
+-
+-class CommentedMapItemsView(CommentedMapView, Set):  # type: ignore
+-    __slots__ = ()
+-
+-    @classmethod
+-    def _from_iterable(self, it):
+-        # type: (Any) -> Any
+-        return set(it)
+-
+-    def __contains__(self, item):
+-        # type: (Any) -> Any
+-        key, value = item
+-        try:
+-            v = self._mapping[key]
+-        except KeyError:
+-            return False
+-        else:
+-            return v == value
+-
+-    def __iter__(self):
+-        # type: () -> Any
+-        for key in self._mapping._keys():
+-            yield (key, self._mapping[key])
+-
+-
+-class CommentedMapValuesView(CommentedMapView):
+-    __slots__ = ()
+-
+-    def __contains__(self, value):
+-        # type: (Any) -> Any
+-        for key in self._mapping:
+-            if value == self._mapping[key]:
+-                return True
+-        return False
+-
+-    def __iter__(self):
+-        # type: () -> Any
+-        for key in self._mapping._keys():
+-            yield self._mapping[key]
+-
+-
+-class CommentedMap(ordereddict, CommentedBase):  # type: ignore
+-    __slots__ = (Comment.attrib, '_ok', '_ref')
+-
+-    def __init__(self, *args, **kw):
+-        # type: (Any, Any) -> None
+-        self._ok = set()  # type: MutableSet[Any]  #  own keys
+-        self._ref = []  # type: List[CommentedMap]
+-        ordereddict.__init__(self, *args, **kw)
+-
+-    def _yaml_add_comment(self, comment, key=NoComment, value=NoComment):
+-        # type: (Any, Optional[Any], Optional[Any]) -> None
+-        """values is set to key to indicate a value attachment of comment"""
+-        if key is not NoComment:
+-            self.yaml_key_comment_extend(key, comment)
+-            return
+-        if value is not NoComment:
+-            self.yaml_value_comment_extend(value, comment)
+-        else:
+-            self.ca.comment = comment
+-
+-    def _yaml_add_eol_comment(self, comment, key):
+-        # type: (Any, Any) -> None
+-        """add on the value line, with value specified by the key"""
+-        self._yaml_add_comment(comment, value=key)
+-
+-    def _yaml_get_columnX(self, key):
+-        # type: (Any) -> Any
+-        return self.ca.items[key][2].start_mark.column
+-
+-    def _yaml_get_column(self, key):
+-        # type: (Any) -> Any
+-        column = None
+-        sel_idx = None
+-        pre, post, last = None, None, None
+-        for x in self:
+-            if pre is not None and x != key:
+-                post = x
+-                break
+-            if x == key:
+-                pre = last
+-            last = x
+-        if pre in self.ca.items:
+-            sel_idx = pre
+-        elif post in self.ca.items:
+-            sel_idx = post
+-        else:
+-            # self.ca.items is not ordered
+-            for k1 in self:
+-                if k1 >= key:
+-                    break
+-                if k1 not in self.ca.items:
+-                    continue
+-                sel_idx = k1
+-        if sel_idx is not None:
+-            column = self._yaml_get_columnX(sel_idx)
+-        return column
+-
+-    def _yaml_get_pre_comment(self):
+-        # type: () -> Any
+-        pre_comments = []  # type: List[Any]
+-        if self.ca.comment is None:
+-            self.ca.comment = [None, pre_comments]
+-        else:
+-            self.ca.comment[1] = pre_comments
+-        return pre_comments
+-
+-    def update(self, vals):
+-        # type: (Any) -> None
+-        try:
+-            ordereddict.update(self, vals)
+-        except TypeError:
+-            # probably a dict that is used
+-            for x in vals:
+-                self[x] = vals[x]
+-        try:
+-            self._ok.update(vals.keys())  # type: ignore
+-        except AttributeError:
+-            # assume a list/tuple of two element lists/tuples
+-            for x in vals:
+-                self._ok.add(x[0])
+-
+-    def insert(self, pos, key, value, comment=None):
+-        # type: (Any, Any, Any, Optional[Any]) -> None
+-        """insert key value into given position
+-        attach comment if provided
+-        """
+-        ordereddict.insert(self, pos, key, value)
+-        self._ok.add(key)
+-        if comment is not None:
+-            self.yaml_add_eol_comment(comment, key=key)
+-
+-    def mlget(self, key, default=None, list_ok=False):
+-        # type: (Any, Any, Any) -> Any
+-        """multi-level get that expects dicts within dicts"""
+-        if not isinstance(key, list):
+-            return self.get(key, default)
+-        # assume that the key is a list of recursively accessible dicts
+-
+-        def get_one_level(key_list, level, d):
+-            # type: (Any, Any, Any) -> Any
+-            if not list_ok:
+-                assert isinstance(d, dict)
+-            if level >= len(key_list):
+-                if level > len(key_list):
+-                    raise IndexError
+-                return d[key_list[level - 1]]
+-            return get_one_level(key_list, level + 1, d[key_list[level - 1]])
+-
+-        try:
+-            return get_one_level(key, 1, self)
+-        except KeyError:
+-            return default
+-        except (TypeError, IndexError):
+-            if not list_ok:
+-                raise
+-            return default
+-
+-    def __getitem__(self, key):
+-        # type: (Any) -> Any
+-        try:
+-            return ordereddict.__getitem__(self, key)
+-        except KeyError:
+-            for merged in getattr(self, merge_attrib, []):
+-                if key in merged[1]:
+-                    return merged[1][key]
+-            raise
+-
+-    def __setitem__(self, key, value):
+-        # type: (Any, Any) -> None
+-        # try to preserve the scalarstring type if setting an existing key to a new value
+-        if key in self:
+-            if (
+-                isinstance(value, string_types)
+-                and not isinstance(value, ScalarString)
+-                and isinstance(self[key], ScalarString)
+-            ):
+-                value = type(self[key])(value)
+-        ordereddict.__setitem__(self, key, value)
+-        self._ok.add(key)
+-
+-    def _unmerged_contains(self, key):
+-        # type: (Any) -> Any
+-        if key in self._ok:
+-            return True
+-        return None
+-
+-    def __contains__(self, key):
+-        # type: (Any) -> bool
+-        return bool(ordereddict.__contains__(self, key))
+-
+-    def get(self, key, default=None):
+-        # type: (Any, Any) -> Any
+-        try:
+-            return self.__getitem__(key)
+-        except:  # NOQA
+-            return default
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        return ordereddict.__repr__(self).replace('CommentedMap', 'ordereddict')
+-
+-    def non_merged_items(self):
+-        # type: () -> Any
+-        for x in ordereddict.__iter__(self):
+-            if x in self._ok:
+-                yield x, ordereddict.__getitem__(self, x)
+-
+-    def __delitem__(self, key):
+-        # type: (Any) -> None
+-        # for merged in getattr(self, merge_attrib, []):
+-        #     if key in merged[1]:
+-        #         value = merged[1][key]
+-        #         break
+-        # else:
+-        #     # not found in merged in stuff
+-        #     ordereddict.__delitem__(self, key)
+-        #    for referer in self._ref:
+-        #        referer.update_key_value(key)
+-        #    return
+-        #
+-        # ordereddict.__setitem__(self, key, value)  # merge might have different value
+-        # self._ok.discard(key)
+-        self._ok.discard(key)
+-        ordereddict.__delitem__(self, key)
+-        for referer in self._ref:
+-            referer.update_key_value(key)
+-
+-    def __iter__(self):
+-        # type: () -> Any
+-        for x in ordereddict.__iter__(self):
+-            yield x
+-
+-    def _keys(self):
+-        # type: () -> Any
+-        for x in ordereddict.__iter__(self):
+-            yield x
+-
+-    def __len__(self):
+-        # type: () -> int
+-        return int(ordereddict.__len__(self))
+-
+-    def __eq__(self, other):
+-        # type: (Any) -> bool
+-        return bool(dict(self) == other)
+-
+-    if PY2:
+-
+-        def keys(self):
+-            # type: () -> Any
+-            return list(self._keys())
+-
+-        def iterkeys(self):
+-            # type: () -> Any
+-            return self._keys()
+-
+-        def viewkeys(self):
+-            # type: () -> Any
+-            return CommentedMapKeysView(self)
+-
+-    else:
+-
+-        def keys(self):
+-            # type: () -> Any
+-            return CommentedMapKeysView(self)
+-
+-    if PY2:
+-
+-        def _values(self):
+-            # type: () -> Any
+-            for x in ordereddict.__iter__(self):
+-                yield ordereddict.__getitem__(self, x)
+-
+-        def values(self):
+-            # type: () -> Any
+-            return list(self._values())
+-
+-        def itervalues(self):
+-            # type: () -> Any
+-            return self._values()
+-
+-        def viewvalues(self):
+-            # type: () -> Any
+-            return CommentedMapValuesView(self)
+-
+-    else:
+-
+-        def values(self):
+-            # type: () -> Any
+-            return CommentedMapValuesView(self)
+-
+-    def _items(self):
+-        # type: () -> Any
+-        for x in ordereddict.__iter__(self):
+-            yield x, ordereddict.__getitem__(self, x)
+-
+-    if PY2:
+-
+-        def items(self):
+-            # type: () -> Any
+-            return list(self._items())
+-
+-        def iteritems(self):
+-            # type: () -> Any
+-            return self._items()
+-
+-        def viewitems(self):
+-            # type: () -> Any
+-            return CommentedMapItemsView(self)
+-
+-    else:
+-
+-        def items(self):
+-            # type: () -> Any
+-            return CommentedMapItemsView(self)
+-
+-    @property
+-    def merge(self):
+-        # type: () -> Any
+-        if not hasattr(self, merge_attrib):
+-            setattr(self, merge_attrib, [])
+-        return getattr(self, merge_attrib)
+-
+-    def copy(self):
+-        # type: () -> Any
+-        x = type(self)()  # update doesn't work
+-        for k, v in self._items():
+-            x[k] = v
+-        self.copy_attributes(x)
+-        return x
+-
+-    def add_referent(self, cm):
+-        # type: (Any) -> None
+-        if cm not in self._ref:
+-            self._ref.append(cm)
+-
+-    def add_yaml_merge(self, value):
+-        # type: (Any) -> None
+-        for v in value:
+-            v[1].add_referent(self)
+-            for k, v in v[1].items():
+-                if ordereddict.__contains__(self, k):
+-                    continue
+-                ordereddict.__setitem__(self, k, v)
+-        self.merge.extend(value)
+-
+-    def update_key_value(self, key):
+-        # type: (Any) -> None
+-        if key in self._ok:
+-            return
+-        for v in self.merge:
+-            if key in v[1]:
+-                ordereddict.__setitem__(self, key, v[1][key])
+-                return
+-        ordereddict.__delitem__(self, key)
+-
+-    def __deepcopy__(self, memo):
+-        # type: (Any) -> Any
+-        res = self.__class__()
+-        memo[id(self)] = res
+-        for k in self:
+-            res[k] = copy.deepcopy(self[k], memo)
+-        self.copy_attributes(res, memo=memo)
+-        return res
+-
+-
+-# based on brownie mappings
+-@classmethod  # type: ignore
+-def raise_immutable(cls, *args, **kwargs):
+-    # type: (Any, *Any, **Any) -> None
+-    raise TypeError('{} objects are immutable'.format(cls.__name__))
+-
+-
+-class CommentedKeyMap(CommentedBase, Mapping):  # type: ignore
+-    __slots__ = Comment.attrib, '_od'
+-    """This primarily exists to be able to roundtrip keys that are mappings"""
+-
+-    def __init__(self, *args, **kw):
+-        # type: (Any, Any) -> None
+-        if hasattr(self, '_od'):
+-            raise_immutable(self)
+-        try:
+-            self._od = ordereddict(*args, **kw)
+-        except TypeError:
+-            if PY2:
+-                self._od = ordereddict(args[0].items())
+-            else:
+-                raise
+-
+-    __delitem__ = __setitem__ = clear = pop = popitem = setdefault = update = raise_immutable
+-
+-    # need to implement __getitem__, __iter__ and __len__
+-    def __getitem__(self, index):
+-        # type: (Any) -> Any
+-        return self._od[index]
+-
+-    def __iter__(self):
+-        # type: () -> Iterator[Any]
+-        for x in self._od.__iter__():
+-            yield x
+-
+-    def __len__(self):
+-        # type: () -> int
+-        return len(self._od)
+-
+-    def __hash__(self):
+-        # type: () -> Any
+-        return hash(tuple(self.items()))
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        if not hasattr(self, merge_attrib):
+-            return self._od.__repr__()
+-        return 'ordereddict(' + repr(list(self._od.items())) + ')'
+-
+-    @classmethod
+-    def fromkeys(keys, v=None):
+-        # type: (Any, Any) -> Any
+-        return CommentedKeyMap(dict.fromkeys(keys, v))
+-
+-    def _yaml_add_comment(self, comment, key=NoComment):
+-        # type: (Any, Optional[Any]) -> None
+-        if key is not NoComment:
+-            self.yaml_key_comment_extend(key, comment)
+-        else:
+-            self.ca.comment = comment
+-
+-    def _yaml_add_eol_comment(self, comment, key):
+-        # type: (Any, Any) -> None
+-        self._yaml_add_comment(comment, key=key)
+-
+-    def _yaml_get_columnX(self, key):
+-        # type: (Any) -> Any
+-        return self.ca.items[key][0].start_mark.column
+-
+-    def _yaml_get_column(self, key):
+-        # type: (Any) -> Any
+-        column = None
+-        sel_idx = None
+-        pre, post = key - 1, key + 1
+-        if pre in self.ca.items:
+-            sel_idx = pre
+-        elif post in self.ca.items:
+-            sel_idx = post
+-        else:
+-            # self.ca.items is not ordered
+-            for row_idx, _k1 in enumerate(self):
+-                if row_idx >= key:
+-                    break
+-                if row_idx not in self.ca.items:
+-                    continue
+-                sel_idx = row_idx
+-        if sel_idx is not None:
+-            column = self._yaml_get_columnX(sel_idx)
+-        return column
+-
+-    def _yaml_get_pre_comment(self):
+-        # type: () -> Any
+-        pre_comments = []  # type: List[Any]
+-        if self.ca.comment is None:
+-            self.ca.comment = [None, pre_comments]
+-        else:
+-            self.ca.comment[1] = pre_comments
+-        return pre_comments
+-
+-
+-class CommentedOrderedMap(CommentedMap):
+-    __slots__ = (Comment.attrib,)
+-
+-
+-class CommentedSet(MutableSet, CommentedBase):  # type: ignore  # NOQA
+-    __slots__ = Comment.attrib, 'odict'
+-
+-    def __init__(self, values=None):
+-        # type: (Any) -> None
+-        self.odict = ordereddict()
+-        MutableSet.__init__(self)
+-        if values is not None:
+-            self |= values  # type: ignore
+-
+-    def _yaml_add_comment(self, comment, key=NoComment, value=NoComment):
+-        # type: (Any, Optional[Any], Optional[Any]) -> None
+-        """values is set to key to indicate a value attachment of comment"""
+-        if key is not NoComment:
+-            self.yaml_key_comment_extend(key, comment)
+-            return
+-        if value is not NoComment:
+-            self.yaml_value_comment_extend(value, comment)
+-        else:
+-            self.ca.comment = comment
+-
+-    def _yaml_add_eol_comment(self, comment, key):
+-        # type: (Any, Any) -> None
+-        """add on the value line, with value specified by the key"""
+-        self._yaml_add_comment(comment, value=key)
+-
+-    def add(self, value):
+-        # type: (Any) -> None
+-        """Add an element."""
+-        self.odict[value] = None
+-
+-    def discard(self, value):
+-        # type: (Any) -> None
+-        """Remove an element.  Do not raise an exception if absent."""
+-        del self.odict[value]
+-
+-    def __contains__(self, x):
+-        # type: (Any) -> Any
+-        return x in self.odict
+-
+-    def __iter__(self):
+-        # type: () -> Any
+-        for x in self.odict:
+-            yield x
+-
+-    def __len__(self):
+-        # type: () -> int
+-        return len(self.odict)
+-
+-    def __repr__(self):
+-        # type: () -> str
+-        return 'set({0!r})'.format(self.odict.keys())
+-
+-
+-class TaggedScalar(CommentedBase):
+-    # the value and style attributes are set during roundtrip construction
+-    def __init__(self, value=None, style=None, tag=None):
+-        # type: (Any, Any, Any) -> None
+-        self.value = value
+-        self.style = style
+-        if tag is not None:
+-            self.yaml_set_tag(tag)
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        return self.value
+-
+-
+-def dump_comments(d, name="", sep='.', out=sys.stdout):
+-    # type: (Any, str, str, Any) -> None
+-    """
+-    recursively dump comments, all but the toplevel preceded by the path
+-    in dotted form x.0.a
+-    """
+-    if isinstance(d, dict) and hasattr(d, 'ca'):
+-        if name:
+-            sys.stdout.write('{}\n'.format(name))
+-        out.write('{}\n'.format(d.ca))  # type: ignore
+-        for k in d:
+-            dump_comments(d[k], name=(name + sep + k) if name else k, sep=sep, out=out)
+-    elif isinstance(d, list) and hasattr(d, 'ca'):
+-        if name:
+-            sys.stdout.write('{}\n'.format(name))
+-        out.write('{}\n'.format(d.ca))  # type: ignore
+-        for idx, k in enumerate(d):
+-            dump_comments(
+-                k, name=(name + sep + str(idx)) if name else str(idx), sep=sep, out=out
+-            )
+diff --git a/dynaconf/vendor_src/ruamel/yaml/compat.py b/dynaconf/vendor_src/ruamel/yaml/compat.py
+deleted file mode 100644
+index c48cb58..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/compat.py
++++ /dev/null
+@@ -1,324 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function
+-
+-# partially from package six by Benjamin Peterson
+-
+-import sys
+-import os
+-import types
+-import traceback
+-from abc import abstractmethod
+-
+-
+-# fmt: off
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Union, BinaryIO, IO, Text, Tuple  # NOQA
+-    from typing import Optional  # NOQA
+-# fmt: on
+-
+-_DEFAULT_YAML_VERSION = (1, 2)
+-
+-try:
+-    from ruamel.ordereddict import ordereddict
+-except:  # NOQA
+-    try:
+-        from collections import OrderedDict
+-    except ImportError:
+-        from ordereddict import OrderedDict  # type: ignore
+-    # to get the right name import ... as ordereddict doesn't do that
+-
+-    class ordereddict(OrderedDict):  # type: ignore
+-        if not hasattr(OrderedDict, 'insert'):
+-
+-            def insert(self, pos, key, value):
+-                # type: (int, Any, Any) -> None
+-                if pos >= len(self):
+-                    self[key] = value
+-                    return
+-                od = ordereddict()
+-                od.update(self)
+-                for k in od:
+-                    del self[k]
+-                for index, old_key in enumerate(od):
+-                    if pos == index:
+-                        self[key] = value
+-                    self[old_key] = od[old_key]
+-
+-
+-PY2 = sys.version_info[0] == 2
+-PY3 = sys.version_info[0] == 3
+-
+-
+-if PY3:
+-
+-    def utf8(s):
+-        # type: (str) -> str
+-        return s
+-
+-    def to_str(s):
+-        # type: (str) -> str
+-        return s
+-
+-    def to_unicode(s):
+-        # type: (str) -> str
+-        return s
+-
+-
+-else:
+-    if False:
+-        unicode = str
+-
+-    def utf8(s):
+-        # type: (unicode) -> str
+-        return s.encode('utf-8')
+-
+-    def to_str(s):
+-        # type: (str) -> str
+-        return str(s)
+-
+-    def to_unicode(s):
+-        # type: (str) -> unicode
+-        return unicode(s)  # NOQA
+-
+-
+-if PY3:
+-    string_types = str
+-    integer_types = int
+-    class_types = type
+-    text_type = str
+-    binary_type = bytes
+-
+-    MAXSIZE = sys.maxsize
+-    unichr = chr
+-    import io
+-
+-    StringIO = io.StringIO
+-    BytesIO = io.BytesIO
+-    # have unlimited precision
+-    no_limit_int = int
+-    from collections.abc import Hashable, MutableSequence, MutableMapping, Mapping  # NOQA
+-
+-else:
+-    string_types = basestring  # NOQA
+-    integer_types = (int, long)  # NOQA
+-    class_types = (type, types.ClassType)
+-    text_type = unicode  # NOQA
+-    binary_type = str
+-
+-    # to allow importing
+-    unichr = unichr
+-    from StringIO import StringIO as _StringIO
+-
+-    StringIO = _StringIO
+-    import cStringIO
+-
+-    BytesIO = cStringIO.StringIO
+-    # have unlimited precision
+-    no_limit_int = long  # NOQA not available on Python 3
+-    from collections import Hashable, MutableSequence, MutableMapping, Mapping  # NOQA
+-
+-if False:  # MYPY
+-    # StreamType = Union[BinaryIO, IO[str], IO[unicode],  StringIO]
+-    # StreamType = Union[BinaryIO, IO[str], StringIO]  # type: ignore
+-    StreamType = Any
+-
+-    StreamTextType = StreamType  # Union[Text, StreamType]
+-    VersionType = Union[List[int], str, Tuple[int, int]]
+-
+-if PY3:
+-    builtins_module = 'builtins'
+-else:
+-    builtins_module = '__builtin__'
+-
+-UNICODE_SIZE = 4 if sys.maxunicode > 65535 else 2
+-
+-
+-def with_metaclass(meta, *bases):
+-    # type: (Any, Any) -> Any
+-    """Create a base class with a metaclass."""
+-    return meta('NewBase', bases, {})
+-
+-
+-DBG_TOKEN = 1
+-DBG_EVENT = 2
+-DBG_NODE = 4
+-
+-
+-_debug = None  # type: Optional[int]
+-if 'RUAMELDEBUG' in os.environ:
+-    _debugx = os.environ.get('RUAMELDEBUG')
+-    if _debugx is None:
+-        _debug = 0
+-    else:
+-        _debug = int(_debugx)
+-
+-
+-if bool(_debug):
+-
+-    class ObjectCounter(object):
+-        def __init__(self):
+-            # type: () -> None
+-            self.map = {}  # type: Dict[Any, Any]
+-
+-        def __call__(self, k):
+-            # type: (Any) -> None
+-            self.map[k] = self.map.get(k, 0) + 1
+-
+-        def dump(self):
+-            # type: () -> None
+-            for k in sorted(self.map):
+-                sys.stdout.write('{} -> {}'.format(k, self.map[k]))
+-
+-    object_counter = ObjectCounter()
+-
+-
+-# used from yaml util when testing
+-def dbg(val=None):
+-    # type: (Any) -> Any
+-    global _debug
+-    if _debug is None:
+-        # set to true or false
+-        _debugx = os.environ.get('YAMLDEBUG')
+-        if _debugx is None:
+-            _debug = 0
+-        else:
+-            _debug = int(_debugx)
+-    if val is None:
+-        return _debug
+-    return _debug & val
+-
+-
+-class Nprint(object):
+-    def __init__(self, file_name=None):
+-        # type: (Any) -> None
+-        self._max_print = None  # type: Any
+-        self._count = None  # type: Any
+-        self._file_name = file_name
+-
+-    def __call__(self, *args, **kw):
+-        # type: (Any, Any) -> None
+-        if not bool(_debug):
+-            return
+-        out = sys.stdout if self._file_name is None else open(self._file_name, 'a')
+-        dbgprint = print  # to fool checking for print statements by dv utility
+-        kw1 = kw.copy()
+-        kw1['file'] = out
+-        dbgprint(*args, **kw1)
+-        out.flush()
+-        if self._max_print is not None:
+-            if self._count is None:
+-                self._count = self._max_print
+-            self._count -= 1
+-            if self._count == 0:
+-                dbgprint('forced exit\n')
+-                traceback.print_stack()
+-                out.flush()
+-                sys.exit(0)
+-        if self._file_name:
+-            out.close()
+-
+-    def set_max_print(self, i):
+-        # type: (int) -> None
+-        self._max_print = i
+-        self._count = None
+-
+-
+-nprint = Nprint()
+-nprintf = Nprint('/var/tmp/ruamel.yaml.log')
+-
+-# char checkers following production rules
+-
+-
+-def check_namespace_char(ch):
+-    # type: (Any) -> bool
+-    if u'\x21' <= ch <= u'\x7E':  # ! to ~
+-        return True
+-    if u'\xA0' <= ch <= u'\uD7FF':
+-        return True
+-    if (u'\uE000' <= ch <= u'\uFFFD') and ch != u'\uFEFF':  # excl. byte order mark
+-        return True
+-    if u'\U00010000' <= ch <= u'\U0010FFFF':
+-        return True
+-    return False
+-
+-
+-def check_anchorname_char(ch):
+-    # type: (Any) -> bool
+-    if ch in u',[]{}':
+-        return False
+-    return check_namespace_char(ch)
+-
+-
+-def version_tnf(t1, t2=None):
+-    # type: (Any, Any) -> Any
+-    """
+-    return True if ruamel.yaml version_info < t1, None if t2 is specified and bigger else False
+-    """
+-    from dynaconf.vendor.ruamel.yaml import version_info  # NOQA
+-
+-    if version_info < t1:
+-        return True
+-    if t2 is not None and version_info < t2:
+-        return None
+-    return False
+-
+-
+-class MutableSliceableSequence(MutableSequence):  # type: ignore
+-    __slots__ = ()
+-
+-    def __getitem__(self, index):
+-        # type: (Any) -> Any
+-        if not isinstance(index, slice):
+-            return self.__getsingleitem__(index)
+-        return type(self)([self[i] for i in range(*index.indices(len(self)))])  # type: ignore
+-
+-    def __setitem__(self, index, value):
+-        # type: (Any, Any) -> None
+-        if not isinstance(index, slice):
+-            return self.__setsingleitem__(index, value)
+-        assert iter(value)
+-        # nprint(index.start, index.stop, index.step, index.indices(len(self)))
+-        if index.step is None:
+-            del self[index.start : index.stop]
+-            for elem in reversed(value):
+-                self.insert(0 if index.start is None else index.start, elem)
+-        else:
+-            range_parms = index.indices(len(self))
+-            nr_assigned_items = (range_parms[1] - range_parms[0] - 1) // range_parms[2] + 1
+-            # need to test before changing, in case TypeError is caught
+-            if nr_assigned_items < len(value):
+-                raise TypeError(
+-                    'too many elements in value {} < {}'.format(nr_assigned_items, len(value))
+-                )
+-            elif nr_assigned_items > len(value):
+-                raise TypeError(
+-                    'not enough elements in value {} > {}'.format(
+-                        nr_assigned_items, len(value)
+-                    )
+-                )
+-            for idx, i in enumerate(range(*range_parms)):
+-                self[i] = value[idx]
+-
+-    def __delitem__(self, index):
+-        # type: (Any) -> None
+-        if not isinstance(index, slice):
+-            return self.__delsingleitem__(index)
+-        # nprint(index.start, index.stop, index.step, index.indices(len(self)))
+-        for i in reversed(range(*index.indices(len(self)))):
+-            del self[i]
+-
+-    @abstractmethod
+-    def __getsingleitem__(self, index):
+-        # type: (Any) -> Any
+-        raise IndexError
+-
+-    @abstractmethod
+-    def __setsingleitem__(self, index, value):
+-        # type: (Any, Any) -> None
+-        raise IndexError
+-
+-    @abstractmethod
+-    def __delsingleitem__(self, index):
+-        # type: (Any) -> None
+-        raise IndexError
+diff --git a/dynaconf/vendor_src/ruamel/yaml/composer.py b/dynaconf/vendor_src/ruamel/yaml/composer.py
+deleted file mode 100644
+index 96e67a7..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/composer.py
++++ /dev/null
+@@ -1,238 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import, print_function
+-
+-import warnings
+-
+-from .error import MarkedYAMLError, ReusedAnchorWarning
+-from .compat import utf8, nprint, nprintf  # NOQA
+-
+-from .events import (
+-    StreamStartEvent,
+-    StreamEndEvent,
+-    MappingStartEvent,
+-    MappingEndEvent,
+-    SequenceStartEvent,
+-    SequenceEndEvent,
+-    AliasEvent,
+-    ScalarEvent,
+-)
+-from .nodes import MappingNode, ScalarNode, SequenceNode
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List  # NOQA
+-
+-__all__ = ['Composer', 'ComposerError']
+-
+-
+-class ComposerError(MarkedYAMLError):
+-    pass
+-
+-
+-class Composer(object):
+-    def __init__(self, loader=None):
+-        # type: (Any) -> None
+-        self.loader = loader
+-        if self.loader is not None and getattr(self.loader, '_composer', None) is None:
+-            self.loader._composer = self
+-        self.anchors = {}  # type: Dict[Any, Any]
+-
+-    @property
+-    def parser(self):
+-        # type: () -> Any
+-        if hasattr(self.loader, 'typ'):
+-            self.loader.parser
+-        return self.loader._parser
+-
+-    @property
+-    def resolver(self):
+-        # type: () -> Any
+-        # assert self.loader._resolver is not None
+-        if hasattr(self.loader, 'typ'):
+-            self.loader.resolver
+-        return self.loader._resolver
+-
+-    def check_node(self):
+-        # type: () -> Any
+-        # Drop the STREAM-START event.
+-        if self.parser.check_event(StreamStartEvent):
+-            self.parser.get_event()
+-
+-        # If there are more documents available?
+-        return not self.parser.check_event(StreamEndEvent)
+-
+-    def get_node(self):
+-        # type: () -> Any
+-        # Get the root node of the next document.
+-        if not self.parser.check_event(StreamEndEvent):
+-            return self.compose_document()
+-
+-    def get_single_node(self):
+-        # type: () -> Any
+-        # Drop the STREAM-START event.
+-        self.parser.get_event()
+-
+-        # Compose a document if the stream is not empty.
+-        document = None  # type: Any
+-        if not self.parser.check_event(StreamEndEvent):
+-            document = self.compose_document()
+-
+-        # Ensure that the stream contains no more documents.
+-        if not self.parser.check_event(StreamEndEvent):
+-            event = self.parser.get_event()
+-            raise ComposerError(
+-                'expected a single document in the stream',
+-                document.start_mark,
+-                'but found another document',
+-                event.start_mark,
+-            )
+-
+-        # Drop the STREAM-END event.
+-        self.parser.get_event()
+-
+-        return document
+-
+-    def compose_document(self):
+-        # type: (Any) -> Any
+-        # Drop the DOCUMENT-START event.
+-        self.parser.get_event()
+-
+-        # Compose the root node.
+-        node = self.compose_node(None, None)
+-
+-        # Drop the DOCUMENT-END event.
+-        self.parser.get_event()
+-
+-        self.anchors = {}
+-        return node
+-
+-    def compose_node(self, parent, index):
+-        # type: (Any, Any) -> Any
+-        if self.parser.check_event(AliasEvent):
+-            event = self.parser.get_event()
+-            alias = event.anchor
+-            if alias not in self.anchors:
+-                raise ComposerError(
+-                    None, None, 'found undefined alias %r' % utf8(alias), event.start_mark
+-                )
+-            return self.anchors[alias]
+-        event = self.parser.peek_event()
+-        anchor = event.anchor
+-        if anchor is not None:  # have an anchor
+-            if anchor in self.anchors:
+-                # raise ComposerError(
+-                #     "found duplicate anchor %r; first occurrence"
+-                #     % utf8(anchor), self.anchors[anchor].start_mark,
+-                #     "second occurrence", event.start_mark)
+-                ws = (
+-                    '\nfound duplicate anchor {!r}\nfirst occurrence {}\nsecond occurrence '
+-                    '{}'.format((anchor), self.anchors[anchor].start_mark, event.start_mark)
+-                )
+-                warnings.warn(ws, ReusedAnchorWarning)
+-        self.resolver.descend_resolver(parent, index)
+-        if self.parser.check_event(ScalarEvent):
+-            node = self.compose_scalar_node(anchor)
+-        elif self.parser.check_event(SequenceStartEvent):
+-            node = self.compose_sequence_node(anchor)
+-        elif self.parser.check_event(MappingStartEvent):
+-            node = self.compose_mapping_node(anchor)
+-        self.resolver.ascend_resolver()
+-        return node
+-
+-    def compose_scalar_node(self, anchor):
+-        # type: (Any) -> Any
+-        event = self.parser.get_event()
+-        tag = event.tag
+-        if tag is None or tag == u'!':
+-            tag = self.resolver.resolve(ScalarNode, event.value, event.implicit)
+-        node = ScalarNode(
+-            tag,
+-            event.value,
+-            event.start_mark,
+-            event.end_mark,
+-            style=event.style,
+-            comment=event.comment,
+-            anchor=anchor,
+-        )
+-        if anchor is not None:
+-            self.anchors[anchor] = node
+-        return node
+-
+-    def compose_sequence_node(self, anchor):
+-        # type: (Any) -> Any
+-        start_event = self.parser.get_event()
+-        tag = start_event.tag
+-        if tag is None or tag == u'!':
+-            tag = self.resolver.resolve(SequenceNode, None, start_event.implicit)
+-        node = SequenceNode(
+-            tag,
+-            [],
+-            start_event.start_mark,
+-            None,
+-            flow_style=start_event.flow_style,
+-            comment=start_event.comment,
+-            anchor=anchor,
+-        )
+-        if anchor is not None:
+-            self.anchors[anchor] = node
+-        index = 0
+-        while not self.parser.check_event(SequenceEndEvent):
+-            node.value.append(self.compose_node(node, index))
+-            index += 1
+-        end_event = self.parser.get_event()
+-        if node.flow_style is True and end_event.comment is not None:
+-            if node.comment is not None:
+-                nprint(
+-                    'Warning: unexpected end_event commment in sequence '
+-                    'node {}'.format(node.flow_style)
+-                )
+-            node.comment = end_event.comment
+-        node.end_mark = end_event.end_mark
+-        self.check_end_doc_comment(end_event, node)
+-        return node
+-
+-    def compose_mapping_node(self, anchor):
+-        # type: (Any) -> Any
+-        start_event = self.parser.get_event()
+-        tag = start_event.tag
+-        if tag is None or tag == u'!':
+-            tag = self.resolver.resolve(MappingNode, None, start_event.implicit)
+-        node = MappingNode(
+-            tag,
+-            [],
+-            start_event.start_mark,
+-            None,
+-            flow_style=start_event.flow_style,
+-            comment=start_event.comment,
+-            anchor=anchor,
+-        )
+-        if anchor is not None:
+-            self.anchors[anchor] = node
+-        while not self.parser.check_event(MappingEndEvent):
+-            # key_event = self.parser.peek_event()
+-            item_key = self.compose_node(node, None)
+-            # if item_key in node.value:
+-            #     raise ComposerError("while composing a mapping",
+-            #             start_event.start_mark,
+-            #             "found duplicate key", key_event.start_mark)
+-            item_value = self.compose_node(node, item_key)
+-            # node.value[item_key] = item_value
+-            node.value.append((item_key, item_value))
+-        end_event = self.parser.get_event()
+-        if node.flow_style is True and end_event.comment is not None:
+-            node.comment = end_event.comment
+-        node.end_mark = end_event.end_mark
+-        self.check_end_doc_comment(end_event, node)
+-        return node
+-
+-    def check_end_doc_comment(self, end_event, node):
+-        # type: (Any, Any) -> None
+-        if end_event.comment and end_event.comment[1]:
+-            # pre comments on an end_event, no following to move to
+-            if node.comment is None:
+-                node.comment = [None, None]
+-            assert not isinstance(node, ScalarEvent)
+-            # this is a post comment on a mapping node, add as third element
+-            # in the list
+-            node.comment.append(end_event.comment[1])
+-            end_event.comment[1] = None
+diff --git a/dynaconf/vendor_src/ruamel/yaml/configobjwalker.py b/dynaconf/vendor_src/ruamel/yaml/configobjwalker.py
+deleted file mode 100644
+index 711efbc..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/configobjwalker.py
++++ /dev/null
+@@ -1,14 +0,0 @@
+-# coding: utf-8
+-
+-import warnings
+-
+-from .util import configobj_walker as new_configobj_walker
+-
+-if False:  # MYPY
+-    from typing import Any  # NOQA
+-
+-
+-def configobj_walker(cfg):
+-    # type: (Any) -> Any
+-    warnings.warn('configobj_walker has moved to ruamel.yaml.util, please update your code')
+-    return new_configobj_walker(cfg)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/constructor.py b/dynaconf/vendor_src/ruamel/yaml/constructor.py
+deleted file mode 100644
+index 5d82ce5..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/constructor.py
++++ /dev/null
+@@ -1,1805 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division
+-
+-import datetime
+-import base64
+-import binascii
+-import re
+-import sys
+-import types
+-import warnings
+-
+-# fmt: off
+-from .error import (MarkedYAMLError, MarkedYAMLFutureWarning,
+-                               MantissaNoDotYAML1_1Warning)
+-from .nodes import *                               # NOQA
+-from .nodes import (SequenceNode, MappingNode, ScalarNode)
+-from .compat import (utf8, builtins_module, to_str, PY2, PY3,  # NOQA
+-                                text_type, nprint, nprintf, version_tnf)
+-from .compat import ordereddict, Hashable, MutableSequence  # type: ignore
+-from .compat import MutableMapping  # type: ignore
+-
+-from .comments import *                               # NOQA
+-from .comments import (CommentedMap, CommentedOrderedMap, CommentedSet,
+-                                  CommentedKeySeq, CommentedSeq, TaggedScalar,
+-                                  CommentedKeyMap)
+-from .scalarstring import (SingleQuotedScalarString, DoubleQuotedScalarString,
+-                                      LiteralScalarString, FoldedScalarString,
+-                                      PlainScalarString, ScalarString,)
+-from .scalarint import ScalarInt, BinaryInt, OctalInt, HexInt, HexCapsInt
+-from .scalarfloat import ScalarFloat
+-from .scalarbool import ScalarBoolean
+-from .timestamp import TimeStamp
+-from .util import RegExp
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, List, Set, Generator, Union, Optional  # NOQA
+-
+-
+-__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
+-           'ConstructorError', 'RoundTripConstructor']
+-# fmt: on
+-
+-
+-class ConstructorError(MarkedYAMLError):
+-    pass
+-
+-
+-class DuplicateKeyFutureWarning(MarkedYAMLFutureWarning):
+-    pass
+-
+-
+-class DuplicateKeyError(MarkedYAMLFutureWarning):
+-    pass
+-
+-
+-class BaseConstructor(object):
+-
+-    yaml_constructors = {}  # type: Dict[Any, Any]
+-    yaml_multi_constructors = {}  # type: Dict[Any, Any]
+-
+-    def __init__(self, preserve_quotes=None, loader=None):
+-        # type: (Optional[bool], Any) -> None
+-        self.loader = loader
+-        if self.loader is not None and getattr(self.loader, '_constructor', None) is None:
+-            self.loader._constructor = self
+-        self.loader = loader
+-        self.yaml_base_dict_type = dict
+-        self.yaml_base_list_type = list
+-        self.constructed_objects = {}  # type: Dict[Any, Any]
+-        self.recursive_objects = {}  # type: Dict[Any, Any]
+-        self.state_generators = []  # type: List[Any]
+-        self.deep_construct = False
+-        self._preserve_quotes = preserve_quotes
+-        self.allow_duplicate_keys = version_tnf((0, 15, 1), (0, 16))
+-
+-    @property
+-    def composer(self):
+-        # type: () -> Any
+-        if hasattr(self.loader, 'typ'):
+-            return self.loader.composer
+-        try:
+-            return self.loader._composer
+-        except AttributeError:
+-            sys.stdout.write('slt {}\n'.format(type(self)))
+-            sys.stdout.write('slc {}\n'.format(self.loader._composer))
+-            sys.stdout.write('{}\n'.format(dir(self)))
+-            raise
+-
+-    @property
+-    def resolver(self):
+-        # type: () -> Any
+-        if hasattr(self.loader, 'typ'):
+-            return self.loader.resolver
+-        return self.loader._resolver
+-
+-    def check_data(self):
+-        # type: () -> Any
+-        # If there are more documents available?
+-        return self.composer.check_node()
+-
+-    def get_data(self):
+-        # type: () -> Any
+-        # Construct and return the next document.
+-        if self.composer.check_node():
+-            return self.construct_document(self.composer.get_node())
+-
+-    def get_single_data(self):
+-        # type: () -> Any
+-        # Ensure that the stream contains a single document and construct it.
+-        node = self.composer.get_single_node()
+-        if node is not None:
+-            return self.construct_document(node)
+-        return None
+-
+-    def construct_document(self, node):
+-        # type: (Any) -> Any
+-        data = self.construct_object(node)
+-        while bool(self.state_generators):
+-            state_generators = self.state_generators
+-            self.state_generators = []
+-            for generator in state_generators:
+-                for _dummy in generator:
+-                    pass
+-        self.constructed_objects = {}
+-        self.recursive_objects = {}
+-        self.deep_construct = False
+-        return data
+-
+-    def construct_object(self, node, deep=False):
+-        # type: (Any, bool) -> Any
+-        """deep is True when creating an object/mapping recursively,
+-        in that case want the underlying elements available during construction
+-        """
+-        if node in self.constructed_objects:
+-            return self.constructed_objects[node]
+-        if deep:
+-            old_deep = self.deep_construct
+-            self.deep_construct = True
+-        if node in self.recursive_objects:
+-            return self.recursive_objects[node]
+-            # raise ConstructorError(
+-            #     None, None, 'found unconstructable recursive node', node.start_mark
+-            # )
+-        self.recursive_objects[node] = None
+-        data = self.construct_non_recursive_object(node)
+-
+-        self.constructed_objects[node] = data
+-        del self.recursive_objects[node]
+-        if deep:
+-            self.deep_construct = old_deep
+-        return data
+-
+-    def construct_non_recursive_object(self, node, tag=None):
+-        # type: (Any, Optional[str]) -> Any
+-        constructor = None  # type: Any
+-        tag_suffix = None
+-        if tag is None:
+-            tag = node.tag
+-        if tag in self.yaml_constructors:
+-            constructor = self.yaml_constructors[tag]
+-        else:
+-            for tag_prefix in self.yaml_multi_constructors:
+-                if tag.startswith(tag_prefix):
+-                    tag_suffix = tag[len(tag_prefix) :]
+-                    constructor = self.yaml_multi_constructors[tag_prefix]
+-                    break
+-            else:
+-                if None in self.yaml_multi_constructors:
+-                    tag_suffix = tag
+-                    constructor = self.yaml_multi_constructors[None]
+-                elif None in self.yaml_constructors:
+-                    constructor = self.yaml_constructors[None]
+-                elif isinstance(node, ScalarNode):
+-                    constructor = self.__class__.construct_scalar
+-                elif isinstance(node, SequenceNode):
+-                    constructor = self.__class__.construct_sequence
+-                elif isinstance(node, MappingNode):
+-                    constructor = self.__class__.construct_mapping
+-        if tag_suffix is None:
+-            data = constructor(self, node)
+-        else:
+-            data = constructor(self, tag_suffix, node)
+-        if isinstance(data, types.GeneratorType):
+-            generator = data
+-            data = next(generator)
+-            if self.deep_construct:
+-                for _dummy in generator:
+-                    pass
+-            else:
+-                self.state_generators.append(generator)
+-        return data
+-
+-    def construct_scalar(self, node):
+-        # type: (Any) -> Any
+-        if not isinstance(node, ScalarNode):
+-            raise ConstructorError(
+-                None, None, 'expected a scalar node, but found %s' % node.id, node.start_mark
+-            )
+-        return node.value
+-
+-    def construct_sequence(self, node, deep=False):
+-        # type: (Any, bool) -> Any
+-        """deep is True when creating an object/mapping recursively,
+-        in that case want the underlying elements available during construction
+-        """
+-        if not isinstance(node, SequenceNode):
+-            raise ConstructorError(
+-                None, None, 'expected a sequence node, but found %s' % node.id, node.start_mark
+-            )
+-        return [self.construct_object(child, deep=deep) for child in node.value]
+-
+-    def construct_mapping(self, node, deep=False):
+-        # type: (Any, bool) -> Any
+-        """deep is True when creating an object/mapping recursively,
+-        in that case want the underlying elements available during construction
+-        """
+-        if not isinstance(node, MappingNode):
+-            raise ConstructorError(
+-                None, None, 'expected a mapping node, but found %s' % node.id, node.start_mark
+-            )
+-        total_mapping = self.yaml_base_dict_type()
+-        if getattr(node, 'merge', None) is not None:
+-            todo = [(node.merge, False), (node.value, False)]
+-        else:
+-            todo = [(node.value, True)]
+-        for values, check in todo:
+-            mapping = self.yaml_base_dict_type()  # type: Dict[Any, Any]
+-            for key_node, value_node in values:
+-                # keys can be list -> deep
+-                key = self.construct_object(key_node, deep=True)
+-                # lists are not hashable, but tuples are
+-                if not isinstance(key, Hashable):
+-                    if isinstance(key, list):
+-                        key = tuple(key)
+-                if PY2:
+-                    try:
+-                        hash(key)
+-                    except TypeError as exc:
+-                        raise ConstructorError(
+-                            'while constructing a mapping',
+-                            node.start_mark,
+-                            'found unacceptable key (%s)' % exc,
+-                            key_node.start_mark,
+-                        )
+-                else:
+-                    if not isinstance(key, Hashable):
+-                        raise ConstructorError(
+-                            'while constructing a mapping',
+-                            node.start_mark,
+-                            'found unhashable key',
+-                            key_node.start_mark,
+-                        )
+-
+-                value = self.construct_object(value_node, deep=deep)
+-                if check:
+-                    if self.check_mapping_key(node, key_node, mapping, key, value):
+-                        mapping[key] = value
+-                else:
+-                    mapping[key] = value
+-            total_mapping.update(mapping)
+-        return total_mapping
+-
+-    def check_mapping_key(self, node, key_node, mapping, key, value):
+-        # type: (Any, Any, Any, Any, Any) -> bool
+-        """return True if key is unique"""
+-        if key in mapping:
+-            if not self.allow_duplicate_keys:
+-                mk = mapping.get(key)
+-                if PY2:
+-                    if isinstance(key, unicode):
+-                        key = key.encode('utf-8')
+-                    if isinstance(value, unicode):
+-                        value = value.encode('utf-8')
+-                    if isinstance(mk, unicode):
+-                        mk = mk.encode('utf-8')
+-                args = [
+-                    'while constructing a mapping',
+-                    node.start_mark,
+-                    'found duplicate key "{}" with value "{}" '
+-                    '(original value: "{}")'.format(key, value, mk),
+-                    key_node.start_mark,
+-                    """
+-                    To suppress this check see:
+-                        http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
+-                    """,
+-                    """\
+-                    Duplicate keys will become an error in future releases, and are errors
+-                    by default when using the new API.
+-                    """,
+-                ]
+-                if self.allow_duplicate_keys is None:
+-                    warnings.warn(DuplicateKeyFutureWarning(*args))
+-                else:
+-                    raise DuplicateKeyError(*args)
+-            return False
+-        return True
+-
+-    def check_set_key(self, node, key_node, setting, key):
+-        # type: (Any, Any, Any, Any, Any) -> None
+-        if key in setting:
+-            if not self.allow_duplicate_keys:
+-                if PY2:
+-                    if isinstance(key, unicode):
+-                        key = key.encode('utf-8')
+-                args = [
+-                    'while constructing a set',
+-                    node.start_mark,
+-                    'found duplicate key "{}"'.format(key),
+-                    key_node.start_mark,
+-                    """
+-                    To suppress this check see:
+-                        http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
+-                    """,
+-                    """\
+-                    Duplicate keys will become an error in future releases, and are errors
+-                    by default when using the new API.
+-                    """,
+-                ]
+-                if self.allow_duplicate_keys is None:
+-                    warnings.warn(DuplicateKeyFutureWarning(*args))
+-                else:
+-                    raise DuplicateKeyError(*args)
+-
+-    def construct_pairs(self, node, deep=False):
+-        # type: (Any, bool) -> Any
+-        if not isinstance(node, MappingNode):
+-            raise ConstructorError(
+-                None, None, 'expected a mapping node, but found %s' % node.id, node.start_mark
+-            )
+-        pairs = []
+-        for key_node, value_node in node.value:
+-            key = self.construct_object(key_node, deep=deep)
+-            value = self.construct_object(value_node, deep=deep)
+-            pairs.append((key, value))
+-        return pairs
+-
+-    @classmethod
+-    def add_constructor(cls, tag, constructor):
+-        # type: (Any, Any) -> None
+-        if 'yaml_constructors' not in cls.__dict__:
+-            cls.yaml_constructors = cls.yaml_constructors.copy()
+-        cls.yaml_constructors[tag] = constructor
+-
+-    @classmethod
+-    def add_multi_constructor(cls, tag_prefix, multi_constructor):
+-        # type: (Any, Any) -> None
+-        if 'yaml_multi_constructors' not in cls.__dict__:
+-            cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
+-        cls.yaml_multi_constructors[tag_prefix] = multi_constructor
+-
+-
+-class SafeConstructor(BaseConstructor):
+-    def construct_scalar(self, node):
+-        # type: (Any) -> Any
+-        if isinstance(node, MappingNode):
+-            for key_node, value_node in node.value:
+-                if key_node.tag == u'tag:yaml.org,2002:value':
+-                    return self.construct_scalar(value_node)
+-        return BaseConstructor.construct_scalar(self, node)
+-
+-    def flatten_mapping(self, node):
+-        # type: (Any) -> Any
+-        """
+-        This implements the merge key feature http://yaml.org/type/merge.html
+-        by inserting keys from the merge dict/list of dicts if not yet
+-        available in this node
+-        """
+-        merge = []  # type: List[Any]
+-        index = 0
+-        while index < len(node.value):
+-            key_node, value_node = node.value[index]
+-            if key_node.tag == u'tag:yaml.org,2002:merge':
+-                if merge:  # double << key
+-                    if self.allow_duplicate_keys:
+-                        del node.value[index]
+-                        index += 1
+-                        continue
+-                    args = [
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'found duplicate key "{}"'.format(key_node.value),
+-                        key_node.start_mark,
+-                        """
+-                        To suppress this check see:
+-                           http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
+-                        """,
+-                        """\
+-                        Duplicate keys will become an error in future releases, and are errors
+-                        by default when using the new API.
+-                        """,
+-                    ]
+-                    if self.allow_duplicate_keys is None:
+-                        warnings.warn(DuplicateKeyFutureWarning(*args))
+-                    else:
+-                        raise DuplicateKeyError(*args)
+-                del node.value[index]
+-                if isinstance(value_node, MappingNode):
+-                    self.flatten_mapping(value_node)
+-                    merge.extend(value_node.value)
+-                elif isinstance(value_node, SequenceNode):
+-                    submerge = []
+-                    for subnode in value_node.value:
+-                        if not isinstance(subnode, MappingNode):
+-                            raise ConstructorError(
+-                                'while constructing a mapping',
+-                                node.start_mark,
+-                                'expected a mapping for merging, but found %s' % subnode.id,
+-                                subnode.start_mark,
+-                            )
+-                        self.flatten_mapping(subnode)
+-                        submerge.append(subnode.value)
+-                    submerge.reverse()
+-                    for value in submerge:
+-                        merge.extend(value)
+-                else:
+-                    raise ConstructorError(
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'expected a mapping or list of mappings for merging, '
+-                        'but found %s' % value_node.id,
+-                        value_node.start_mark,
+-                    )
+-            elif key_node.tag == u'tag:yaml.org,2002:value':
+-                key_node.tag = u'tag:yaml.org,2002:str'
+-                index += 1
+-            else:
+-                index += 1
+-        if bool(merge):
+-            node.merge = merge  # separate merge keys to be able to update without duplicate
+-            node.value = merge + node.value
+-
+-    def construct_mapping(self, node, deep=False):
+-        # type: (Any, bool) -> Any
+-        """deep is True when creating an object/mapping recursively,
+-        in that case want the underlying elements available during construction
+-        """
+-        if isinstance(node, MappingNode):
+-            self.flatten_mapping(node)
+-        return BaseConstructor.construct_mapping(self, node, deep=deep)
+-
+-    def construct_yaml_null(self, node):
+-        # type: (Any) -> Any
+-        self.construct_scalar(node)
+-        return None
+-
+-    # YAML 1.2 spec doesn't mention yes/no etc any more, 1.1 does
+-    bool_values = {
+-        u'yes': True,
+-        u'no': False,
+-        u'y': True,
+-        u'n': False,
+-        u'true': True,
+-        u'false': False,
+-        u'on': True,
+-        u'off': False,
+-    }
+-
+-    def construct_yaml_bool(self, node):
+-        # type: (Any) -> bool
+-        value = self.construct_scalar(node)
+-        return self.bool_values[value.lower()]
+-
+-    def construct_yaml_int(self, node):
+-        # type: (Any) -> int
+-        value_s = to_str(self.construct_scalar(node))
+-        value_s = value_s.replace('_', "")
+-        sign = +1
+-        if value_s[0] == '-':
+-            sign = -1
+-        if value_s[0] in '+-':
+-            value_s = value_s[1:]
+-        if value_s == '0':
+-            return 0
+-        elif value_s.startswith('0b'):
+-            return sign * int(value_s[2:], 2)
+-        elif value_s.startswith('0x'):
+-            return sign * int(value_s[2:], 16)
+-        elif value_s.startswith('0o'):
+-            return sign * int(value_s[2:], 8)
+-        elif self.resolver.processing_version == (1, 1) and value_s[0] == '0':
+-            return sign * int(value_s, 8)
+-        elif self.resolver.processing_version == (1, 1) and ':' in value_s:
+-            digits = [int(part) for part in value_s.split(':')]
+-            digits.reverse()
+-            base = 1
+-            value = 0
+-            for digit in digits:
+-                value += digit * base
+-                base *= 60
+-            return sign * value
+-        else:
+-            return sign * int(value_s)
+-
+-    inf_value = 1e300
+-    while inf_value != inf_value * inf_value:
+-        inf_value *= inf_value
+-    nan_value = -inf_value / inf_value  # Trying to make a quiet NaN (like C99).
+-
+-    def construct_yaml_float(self, node):
+-        # type: (Any) -> float
+-        value_so = to_str(self.construct_scalar(node))
+-        value_s = value_so.replace('_', "").lower()
+-        sign = +1
+-        if value_s[0] == '-':
+-            sign = -1
+-        if value_s[0] in '+-':
+-            value_s = value_s[1:]
+-        if value_s == '.inf':
+-            return sign * self.inf_value
+-        elif value_s == '.nan':
+-            return self.nan_value
+-        elif self.resolver.processing_version != (1, 2) and ':' in value_s:
+-            digits = [float(part) for part in value_s.split(':')]
+-            digits.reverse()
+-            base = 1
+-            value = 0.0
+-            for digit in digits:
+-                value += digit * base
+-                base *= 60
+-            return sign * value
+-        else:
+-            if self.resolver.processing_version != (1, 2) and 'e' in value_s:
+-                # value_s is lower case independent of input
+-                mantissa, exponent = value_s.split('e')
+-                if '.' not in mantissa:
+-                    warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so))
+-            return sign * float(value_s)
+-
+-    if PY3:
+-
+-        def construct_yaml_binary(self, node):
+-            # type: (Any) -> Any
+-            try:
+-                value = self.construct_scalar(node).encode('ascii')
+-            except UnicodeEncodeError as exc:
+-                raise ConstructorError(
+-                    None,
+-                    None,
+-                    'failed to convert base64 data into ascii: %s' % exc,
+-                    node.start_mark,
+-                )
+-            try:
+-                if hasattr(base64, 'decodebytes'):
+-                    return base64.decodebytes(value)
+-                else:
+-                    return base64.decodestring(value)
+-            except binascii.Error as exc:
+-                raise ConstructorError(
+-                    None, None, 'failed to decode base64 data: %s' % exc, node.start_mark
+-                )
+-
+-    else:
+-
+-        def construct_yaml_binary(self, node):
+-            # type: (Any) -> Any
+-            value = self.construct_scalar(node)
+-            try:
+-                return to_str(value).decode('base64')
+-            except (binascii.Error, UnicodeEncodeError) as exc:
+-                raise ConstructorError(
+-                    None, None, 'failed to decode base64 data: %s' % exc, node.start_mark
+-                )
+-
+-    timestamp_regexp = RegExp(
+-        u"""^(?P<year>[0-9][0-9][0-9][0-9])
+-          -(?P<month>[0-9][0-9]?)
+-          -(?P<day>[0-9][0-9]?)
+-          (?:((?P<t>[Tt])|[ \\t]+)   # explictly not retaining extra spaces
+-          (?P<hour>[0-9][0-9]?)
+-          :(?P<minute>[0-9][0-9])
+-          :(?P<second>[0-9][0-9])
+-          (?:\\.(?P<fraction>[0-9]*))?
+-          (?:[ \\t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
+-          (?::(?P<tz_minute>[0-9][0-9]))?))?)?$""",
+-        re.X,
+-    )
+-
+-    def construct_yaml_timestamp(self, node, values=None):
+-        # type: (Any, Any) -> Any
+-        if values is None:
+-            try:
+-                match = self.timestamp_regexp.match(node.value)
+-            except TypeError:
+-                match = None
+-            if match is None:
+-                raise ConstructorError(
+-                    None,
+-                    None,
+-                    'failed to construct timestamp from "{}"'.format(node.value),
+-                    node.start_mark,
+-                )
+-            values = match.groupdict()
+-        year = int(values['year'])
+-        month = int(values['month'])
+-        day = int(values['day'])
+-        if not values['hour']:
+-            return datetime.date(year, month, day)
+-        hour = int(values['hour'])
+-        minute = int(values['minute'])
+-        second = int(values['second'])
+-        fraction = 0
+-        if values['fraction']:
+-            fraction_s = values['fraction'][:6]
+-            while len(fraction_s) < 6:
+-                fraction_s += '0'
+-            fraction = int(fraction_s)
+-            if len(values['fraction']) > 6 and int(values['fraction'][6]) > 4:
+-                fraction += 1
+-        delta = None
+-        if values['tz_sign']:
+-            tz_hour = int(values['tz_hour'])
+-            minutes = values['tz_minute']
+-            tz_minute = int(minutes) if minutes else 0
+-            delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
+-            if values['tz_sign'] == '-':
+-                delta = -delta
+-        # should do something else instead (or hook this up to the preceding if statement
+-        # in reverse
+-        #  if delta is None:
+-        #      return datetime.datetime(year, month, day, hour, minute, second, fraction)
+-        #  return datetime.datetime(year, month, day, hour, minute, second, fraction,
+-        #                           datetime.timezone.utc)
+-        # the above is not good enough though, should provide tzinfo. In Python3 that is easily
+-        # doable drop that kind of support for Python2 as it has not native tzinfo
+-        data = datetime.datetime(year, month, day, hour, minute, second, fraction)
+-        if delta:
+-            data -= delta
+-        return data
+-
+-    def construct_yaml_omap(self, node):
+-        # type: (Any) -> Any
+-        # Note: we do now check for duplicate keys
+-        omap = ordereddict()
+-        yield omap
+-        if not isinstance(node, SequenceNode):
+-            raise ConstructorError(
+-                'while constructing an ordered map',
+-                node.start_mark,
+-                'expected a sequence, but found %s' % node.id,
+-                node.start_mark,
+-            )
+-        for subnode in node.value:
+-            if not isinstance(subnode, MappingNode):
+-                raise ConstructorError(
+-                    'while constructing an ordered map',
+-                    node.start_mark,
+-                    'expected a mapping of length 1, but found %s' % subnode.id,
+-                    subnode.start_mark,
+-                )
+-            if len(subnode.value) != 1:
+-                raise ConstructorError(
+-                    'while constructing an ordered map',
+-                    node.start_mark,
+-                    'expected a single mapping item, but found %d items' % len(subnode.value),
+-                    subnode.start_mark,
+-                )
+-            key_node, value_node = subnode.value[0]
+-            key = self.construct_object(key_node)
+-            assert key not in omap
+-            value = self.construct_object(value_node)
+-            omap[key] = value
+-
+-    def construct_yaml_pairs(self, node):
+-        # type: (Any) -> Any
+-        # Note: the same code as `construct_yaml_omap`.
+-        pairs = []  # type: List[Any]
+-        yield pairs
+-        if not isinstance(node, SequenceNode):
+-            raise ConstructorError(
+-                'while constructing pairs',
+-                node.start_mark,
+-                'expected a sequence, but found %s' % node.id,
+-                node.start_mark,
+-            )
+-        for subnode in node.value:
+-            if not isinstance(subnode, MappingNode):
+-                raise ConstructorError(
+-                    'while constructing pairs',
+-                    node.start_mark,
+-                    'expected a mapping of length 1, but found %s' % subnode.id,
+-                    subnode.start_mark,
+-                )
+-            if len(subnode.value) != 1:
+-                raise ConstructorError(
+-                    'while constructing pairs',
+-                    node.start_mark,
+-                    'expected a single mapping item, but found %d items' % len(subnode.value),
+-                    subnode.start_mark,
+-                )
+-            key_node, value_node = subnode.value[0]
+-            key = self.construct_object(key_node)
+-            value = self.construct_object(value_node)
+-            pairs.append((key, value))
+-
+-    def construct_yaml_set(self, node):
+-        # type: (Any) -> Any
+-        data = set()  # type: Set[Any]
+-        yield data
+-        value = self.construct_mapping(node)
+-        data.update(value)
+-
+-    def construct_yaml_str(self, node):
+-        # type: (Any) -> Any
+-        value = self.construct_scalar(node)
+-        if PY3:
+-            return value
+-        try:
+-            return value.encode('ascii')
+-        except UnicodeEncodeError:
+-            return value
+-
+-    def construct_yaml_seq(self, node):
+-        # type: (Any) -> Any
+-        data = self.yaml_base_list_type()  # type: List[Any]
+-        yield data
+-        data.extend(self.construct_sequence(node))
+-
+-    def construct_yaml_map(self, node):
+-        # type: (Any) -> Any
+-        data = self.yaml_base_dict_type()  # type: Dict[Any, Any]
+-        yield data
+-        value = self.construct_mapping(node)
+-        data.update(value)
+-
+-    def construct_yaml_object(self, node, cls):
+-        # type: (Any, Any) -> Any
+-        data = cls.__new__(cls)
+-        yield data
+-        if hasattr(data, '__setstate__'):
+-            state = self.construct_mapping(node, deep=True)
+-            data.__setstate__(state)
+-        else:
+-            state = self.construct_mapping(node)
+-            data.__dict__.update(state)
+-
+-    def construct_undefined(self, node):
+-        # type: (Any) -> None
+-        raise ConstructorError(
+-            None,
+-            None,
+-            'could not determine a constructor for the tag %r' % utf8(node.tag),
+-            node.start_mark,
+-        )
+-
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:null', SafeConstructor.construct_yaml_null)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:bool', SafeConstructor.construct_yaml_bool)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:int', SafeConstructor.construct_yaml_int)
+-
+-SafeConstructor.add_constructor(
+-    u'tag:yaml.org,2002:float', SafeConstructor.construct_yaml_float
+-)
+-
+-SafeConstructor.add_constructor(
+-    u'tag:yaml.org,2002:binary', SafeConstructor.construct_yaml_binary
+-)
+-
+-SafeConstructor.add_constructor(
+-    u'tag:yaml.org,2002:timestamp', SafeConstructor.construct_yaml_timestamp
+-)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:omap', SafeConstructor.construct_yaml_omap)
+-
+-SafeConstructor.add_constructor(
+-    u'tag:yaml.org,2002:pairs', SafeConstructor.construct_yaml_pairs
+-)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:set', SafeConstructor.construct_yaml_set)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:str', SafeConstructor.construct_yaml_str)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:seq', SafeConstructor.construct_yaml_seq)
+-
+-SafeConstructor.add_constructor(u'tag:yaml.org,2002:map', SafeConstructor.construct_yaml_map)
+-
+-SafeConstructor.add_constructor(None, SafeConstructor.construct_undefined)
+-
+-if PY2:
+-
+-    class classobj:
+-        pass
+-
+-
+-class Constructor(SafeConstructor):
+-    def construct_python_str(self, node):
+-        # type: (Any) -> Any
+-        return utf8(self.construct_scalar(node))
+-
+-    def construct_python_unicode(self, node):
+-        # type: (Any) -> Any
+-        return self.construct_scalar(node)
+-
+-    if PY3:
+-
+-        def construct_python_bytes(self, node):
+-            # type: (Any) -> Any
+-            try:
+-                value = self.construct_scalar(node).encode('ascii')
+-            except UnicodeEncodeError as exc:
+-                raise ConstructorError(
+-                    None,
+-                    None,
+-                    'failed to convert base64 data into ascii: %s' % exc,
+-                    node.start_mark,
+-                )
+-            try:
+-                if hasattr(base64, 'decodebytes'):
+-                    return base64.decodebytes(value)
+-                else:
+-                    return base64.decodestring(value)
+-            except binascii.Error as exc:
+-                raise ConstructorError(
+-                    None, None, 'failed to decode base64 data: %s' % exc, node.start_mark
+-                )
+-
+-    def construct_python_long(self, node):
+-        # type: (Any) -> int
+-        val = self.construct_yaml_int(node)
+-        if PY3:
+-            return val
+-        return int(val)
+-
+-    def construct_python_complex(self, node):
+-        # type: (Any) -> Any
+-        return complex(self.construct_scalar(node))
+-
+-    def construct_python_tuple(self, node):
+-        # type: (Any) -> Any
+-        return tuple(self.construct_sequence(node))
+-
+-    def find_python_module(self, name, mark):
+-        # type: (Any, Any) -> Any
+-        if not name:
+-            raise ConstructorError(
+-                'while constructing a Python module',
+-                mark,
+-                'expected non-empty name appended to the tag',
+-                mark,
+-            )
+-        try:
+-            __import__(name)
+-        except ImportError as exc:
+-            raise ConstructorError(
+-                'while constructing a Python module',
+-                mark,
+-                'cannot find module %r (%s)' % (utf8(name), exc),
+-                mark,
+-            )
+-        return sys.modules[name]
+-
+-    def find_python_name(self, name, mark):
+-        # type: (Any, Any) -> Any
+-        if not name:
+-            raise ConstructorError(
+-                'while constructing a Python object',
+-                mark,
+-                'expected non-empty name appended to the tag',
+-                mark,
+-            )
+-        if u'.' in name:
+-            lname = name.split('.')
+-            lmodule_name = lname
+-            lobject_name = []  # type: List[Any]
+-            while len(lmodule_name) > 1:
+-                lobject_name.insert(0, lmodule_name.pop())
+-                module_name = '.'.join(lmodule_name)
+-                try:
+-                    __import__(module_name)
+-                    # object_name = '.'.join(object_name)
+-                    break
+-                except ImportError:
+-                    continue
+-        else:
+-            module_name = builtins_module
+-            lobject_name = [name]
+-        try:
+-            __import__(module_name)
+-        except ImportError as exc:
+-            raise ConstructorError(
+-                'while constructing a Python object',
+-                mark,
+-                'cannot find module %r (%s)' % (utf8(module_name), exc),
+-                mark,
+-            )
+-        module = sys.modules[module_name]
+-        object_name = '.'.join(lobject_name)
+-        obj = module
+-        while lobject_name:
+-            if not hasattr(obj, lobject_name[0]):
+-
+-                raise ConstructorError(
+-                    'while constructing a Python object',
+-                    mark,
+-                    'cannot find %r in the module %r' % (utf8(object_name), module.__name__),
+-                    mark,
+-                )
+-            obj = getattr(obj, lobject_name.pop(0))
+-        return obj
+-
+-    def construct_python_name(self, suffix, node):
+-        # type: (Any, Any) -> Any
+-        value = self.construct_scalar(node)
+-        if value:
+-            raise ConstructorError(
+-                'while constructing a Python name',
+-                node.start_mark,
+-                'expected the empty value, but found %r' % utf8(value),
+-                node.start_mark,
+-            )
+-        return self.find_python_name(suffix, node.start_mark)
+-
+-    def construct_python_module(self, suffix, node):
+-        # type: (Any, Any) -> Any
+-        value = self.construct_scalar(node)
+-        if value:
+-            raise ConstructorError(
+-                'while constructing a Python module',
+-                node.start_mark,
+-                'expected the empty value, but found %r' % utf8(value),
+-                node.start_mark,
+-            )
+-        return self.find_python_module(suffix, node.start_mark)
+-
+-    def make_python_instance(self, suffix, node, args=None, kwds=None, newobj=False):
+-        # type: (Any, Any, Any, Any, bool) -> Any
+-        if not args:
+-            args = []
+-        if not kwds:
+-            kwds = {}
+-        cls = self.find_python_name(suffix, node.start_mark)
+-        if PY3:
+-            if newobj and isinstance(cls, type):
+-                return cls.__new__(cls, *args, **kwds)
+-            else:
+-                return cls(*args, **kwds)
+-        else:
+-            if newobj and isinstance(cls, type(classobj)) and not args and not kwds:
+-                instance = classobj()
+-                instance.__class__ = cls
+-                return instance
+-            elif newobj and isinstance(cls, type):
+-                return cls.__new__(cls, *args, **kwds)
+-            else:
+-                return cls(*args, **kwds)
+-
+-    def set_python_instance_state(self, instance, state):
+-        # type: (Any, Any) -> None
+-        if hasattr(instance, '__setstate__'):
+-            instance.__setstate__(state)
+-        else:
+-            slotstate = {}  # type: Dict[Any, Any]
+-            if isinstance(state, tuple) and len(state) == 2:
+-                state, slotstate = state
+-            if hasattr(instance, '__dict__'):
+-                instance.__dict__.update(state)
+-            elif state:
+-                slotstate.update(state)
+-            for key, value in slotstate.items():
+-                setattr(instance, key, value)
+-
+-    def construct_python_object(self, suffix, node):
+-        # type: (Any, Any) -> Any
+-        # Format:
+-        #   !!python/object:module.name { ... state ... }
+-        instance = self.make_python_instance(suffix, node, newobj=True)
+-        self.recursive_objects[node] = instance
+-        yield instance
+-        deep = hasattr(instance, '__setstate__')
+-        state = self.construct_mapping(node, deep=deep)
+-        self.set_python_instance_state(instance, state)
+-
+-    def construct_python_object_apply(self, suffix, node, newobj=False):
+-        # type: (Any, Any, bool) -> Any
+-        # Format:
+-        #   !!python/object/apply       # (or !!python/object/new)
+-        #   args: [ ... arguments ... ]
+-        #   kwds: { ... keywords ... }
+-        #   state: ... state ...
+-        #   listitems: [ ... listitems ... ]
+-        #   dictitems: { ... dictitems ... }
+-        # or short format:
+-        #   !!python/object/apply [ ... arguments ... ]
+-        # The difference between !!python/object/apply and !!python/object/new
+-        # is how an object is created, check make_python_instance for details.
+-        if isinstance(node, SequenceNode):
+-            args = self.construct_sequence(node, deep=True)
+-            kwds = {}  # type: Dict[Any, Any]
+-            state = {}  # type: Dict[Any, Any]
+-            listitems = []  # type: List[Any]
+-            dictitems = {}  # type: Dict[Any, Any]
+-        else:
+-            value = self.construct_mapping(node, deep=True)
+-            args = value.get('args', [])
+-            kwds = value.get('kwds', {})
+-            state = value.get('state', {})
+-            listitems = value.get('listitems', [])
+-            dictitems = value.get('dictitems', {})
+-        instance = self.make_python_instance(suffix, node, args, kwds, newobj)
+-        if bool(state):
+-            self.set_python_instance_state(instance, state)
+-        if bool(listitems):
+-            instance.extend(listitems)
+-        if bool(dictitems):
+-            for key in dictitems:
+-                instance[key] = dictitems[key]
+-        return instance
+-
+-    def construct_python_object_new(self, suffix, node):
+-        # type: (Any, Any) -> Any
+-        return self.construct_python_object_apply(suffix, node, newobj=True)
+-
+-
+-Constructor.add_constructor(u'tag:yaml.org,2002:python/none', Constructor.construct_yaml_null)
+-
+-Constructor.add_constructor(u'tag:yaml.org,2002:python/bool', Constructor.construct_yaml_bool)
+-
+-Constructor.add_constructor(u'tag:yaml.org,2002:python/str', Constructor.construct_python_str)
+-
+-Constructor.add_constructor(
+-    u'tag:yaml.org,2002:python/unicode', Constructor.construct_python_unicode
+-)
+-
+-if PY3:
+-    Constructor.add_constructor(
+-        u'tag:yaml.org,2002:python/bytes', Constructor.construct_python_bytes
+-    )
+-
+-Constructor.add_constructor(u'tag:yaml.org,2002:python/int', Constructor.construct_yaml_int)
+-
+-Constructor.add_constructor(
+-    u'tag:yaml.org,2002:python/long', Constructor.construct_python_long
+-)
+-
+-Constructor.add_constructor(
+-    u'tag:yaml.org,2002:python/float', Constructor.construct_yaml_float
+-)
+-
+-Constructor.add_constructor(
+-    u'tag:yaml.org,2002:python/complex', Constructor.construct_python_complex
+-)
+-
+-Constructor.add_constructor(u'tag:yaml.org,2002:python/list', Constructor.construct_yaml_seq)
+-
+-Constructor.add_constructor(
+-    u'tag:yaml.org,2002:python/tuple', Constructor.construct_python_tuple
+-)
+-
+-Constructor.add_constructor(u'tag:yaml.org,2002:python/dict', Constructor.construct_yaml_map)
+-
+-Constructor.add_multi_constructor(
+-    u'tag:yaml.org,2002:python/name:', Constructor.construct_python_name
+-)
+-
+-Constructor.add_multi_constructor(
+-    u'tag:yaml.org,2002:python/module:', Constructor.construct_python_module
+-)
+-
+-Constructor.add_multi_constructor(
+-    u'tag:yaml.org,2002:python/object:', Constructor.construct_python_object
+-)
+-
+-Constructor.add_multi_constructor(
+-    u'tag:yaml.org,2002:python/object/apply:', Constructor.construct_python_object_apply
+-)
+-
+-Constructor.add_multi_constructor(
+-    u'tag:yaml.org,2002:python/object/new:', Constructor.construct_python_object_new
+-)
+-
+-
+-class RoundTripConstructor(SafeConstructor):
+-    """need to store the comments on the node itself,
+-    as well as on the items
+-    """
+-
+-    def construct_scalar(self, node):
+-        # type: (Any) -> Any
+-        if not isinstance(node, ScalarNode):
+-            raise ConstructorError(
+-                None, None, 'expected a scalar node, but found %s' % node.id, node.start_mark
+-            )
+-
+-        if node.style == '|' and isinstance(node.value, text_type):
+-            lss = LiteralScalarString(node.value, anchor=node.anchor)
+-            if node.comment and node.comment[1]:
+-                lss.comment = node.comment[1][0]  # type: ignore
+-            return lss
+-        if node.style == '>' and isinstance(node.value, text_type):
+-            fold_positions = []  # type: List[int]
+-            idx = -1
+-            while True:
+-                idx = node.value.find('\a', idx + 1)
+-                if idx < 0:
+-                    break
+-                fold_positions.append(idx - len(fold_positions))
+-            fss = FoldedScalarString(node.value.replace('\a', ''), anchor=node.anchor)
+-            if node.comment and node.comment[1]:
+-                fss.comment = node.comment[1][0]  # type: ignore
+-            if fold_positions:
+-                fss.fold_pos = fold_positions  # type: ignore
+-            return fss
+-        elif bool(self._preserve_quotes) and isinstance(node.value, text_type):
+-            if node.style == "'":
+-                return SingleQuotedScalarString(node.value, anchor=node.anchor)
+-            if node.style == '"':
+-                return DoubleQuotedScalarString(node.value, anchor=node.anchor)
+-        if node.anchor:
+-            return PlainScalarString(node.value, anchor=node.anchor)
+-        return node.value
+-
+-    def construct_yaml_int(self, node):
+-        # type: (Any) -> Any
+-        width = None  # type: Any
+-        value_su = to_str(self.construct_scalar(node))
+-        try:
+-            sx = value_su.rstrip('_')
+-            underscore = [len(sx) - sx.rindex('_') - 1, False, False]  # type: Any
+-        except ValueError:
+-            underscore = None
+-        except IndexError:
+-            underscore = None
+-        value_s = value_su.replace('_', "")
+-        sign = +1
+-        if value_s[0] == '-':
+-            sign = -1
+-        if value_s[0] in '+-':
+-            value_s = value_s[1:]
+-        if value_s == '0':
+-            return 0
+-        elif value_s.startswith('0b'):
+-            if self.resolver.processing_version > (1, 1) and value_s[2] == '0':
+-                width = len(value_s[2:])
+-            if underscore is not None:
+-                underscore[1] = value_su[2] == '_'
+-                underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == '_'
+-            return BinaryInt(
+-                sign * int(value_s[2:], 2),
+-                width=width,
+-                underscore=underscore,
+-                anchor=node.anchor,
+-            )
+-        elif value_s.startswith('0x'):
+-            # default to lower-case if no a-fA-F in string
+-            if self.resolver.processing_version > (1, 1) and value_s[2] == '0':
+-                width = len(value_s[2:])
+-            hex_fun = HexInt  # type: Any
+-            for ch in value_s[2:]:
+-                if ch in 'ABCDEF':  # first non-digit is capital
+-                    hex_fun = HexCapsInt
+-                    break
+-                if ch in 'abcdef':
+-                    break
+-            if underscore is not None:
+-                underscore[1] = value_su[2] == '_'
+-                underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == '_'
+-            return hex_fun(
+-                sign * int(value_s[2:], 16),
+-                width=width,
+-                underscore=underscore,
+-                anchor=node.anchor,
+-            )
+-        elif value_s.startswith('0o'):
+-            if self.resolver.processing_version > (1, 1) and value_s[2] == '0':
+-                width = len(value_s[2:])
+-            if underscore is not None:
+-                underscore[1] = value_su[2] == '_'
+-                underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == '_'
+-            return OctalInt(
+-                sign * int(value_s[2:], 8),
+-                width=width,
+-                underscore=underscore,
+-                anchor=node.anchor,
+-            )
+-        elif self.resolver.processing_version != (1, 2) and value_s[0] == '0':
+-            return sign * int(value_s, 8)
+-        elif self.resolver.processing_version != (1, 2) and ':' in value_s:
+-            digits = [int(part) for part in value_s.split(':')]
+-            digits.reverse()
+-            base = 1
+-            value = 0
+-            for digit in digits:
+-                value += digit * base
+-                base *= 60
+-            return sign * value
+-        elif self.resolver.processing_version > (1, 1) and value_s[0] == '0':
+-            # not an octal, an integer with leading zero(s)
+-            if underscore is not None:
+-                # cannot have a leading underscore
+-                underscore[2] = len(value_su) > 1 and value_su[-1] == '_'
+-            return ScalarInt(sign * int(value_s), width=len(value_s), underscore=underscore)
+-        elif underscore:
+-            # cannot have a leading underscore
+-            underscore[2] = len(value_su) > 1 and value_su[-1] == '_'
+-            return ScalarInt(
+-                sign * int(value_s), width=None, underscore=underscore, anchor=node.anchor
+-            )
+-        elif node.anchor:
+-            return ScalarInt(sign * int(value_s), width=None, anchor=node.anchor)
+-        else:
+-            return sign * int(value_s)
+-
+-    def construct_yaml_float(self, node):
+-        # type: (Any) -> Any
+-        def leading_zeros(v):
+-            # type: (Any) -> int
+-            lead0 = 0
+-            idx = 0
+-            while idx < len(v) and v[idx] in '0.':
+-                if v[idx] == '0':
+-                    lead0 += 1
+-                idx += 1
+-            return lead0
+-
+-        # underscore = None
+-        m_sign = False  # type: Any
+-        value_so = to_str(self.construct_scalar(node))
+-        value_s = value_so.replace('_', "").lower()
+-        sign = +1
+-        if value_s[0] == '-':
+-            sign = -1
+-        if value_s[0] in '+-':
+-            m_sign = value_s[0]
+-            value_s = value_s[1:]
+-        if value_s == '.inf':
+-            return sign * self.inf_value
+-        if value_s == '.nan':
+-            return self.nan_value
+-        if self.resolver.processing_version != (1, 2) and ':' in value_s:
+-            digits = [float(part) for part in value_s.split(':')]
+-            digits.reverse()
+-            base = 1
+-            value = 0.0
+-            for digit in digits:
+-                value += digit * base
+-                base *= 60
+-            return sign * value
+-        if 'e' in value_s:
+-            try:
+-                mantissa, exponent = value_so.split('e')
+-                exp = 'e'
+-            except ValueError:
+-                mantissa, exponent = value_so.split('E')
+-                exp = 'E'
+-            if self.resolver.processing_version != (1, 2):
+-                # value_s is lower case independent of input
+-                if '.' not in mantissa:
+-                    warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so))
+-            lead0 = leading_zeros(mantissa)
+-            width = len(mantissa)
+-            prec = mantissa.find('.')
+-            if m_sign:
+-                width -= 1
+-            e_width = len(exponent)
+-            e_sign = exponent[0] in '+-'
+-            # nprint('sf', width, prec, m_sign, exp, e_width, e_sign)
+-            return ScalarFloat(
+-                sign * float(value_s),
+-                width=width,
+-                prec=prec,
+-                m_sign=m_sign,
+-                m_lead0=lead0,
+-                exp=exp,
+-                e_width=e_width,
+-                e_sign=e_sign,
+-                anchor=node.anchor,
+-            )
+-        width = len(value_so)
+-        prec = value_so.index('.')  # you can use index, this would not be float without dot
+-        lead0 = leading_zeros(value_so)
+-        return ScalarFloat(
+-            sign * float(value_s),
+-            width=width,
+-            prec=prec,
+-            m_sign=m_sign,
+-            m_lead0=lead0,
+-            anchor=node.anchor,
+-        )
+-
+-    def construct_yaml_str(self, node):
+-        # type: (Any) -> Any
+-        value = self.construct_scalar(node)
+-        if isinstance(value, ScalarString):
+-            return value
+-        if PY3:
+-            return value
+-        try:
+-            return value.encode('ascii')
+-        except AttributeError:
+-            # in case you replace the node dynamically e.g. with a dict
+-            return value
+-        except UnicodeEncodeError:
+-            return value
+-
+-    def construct_rt_sequence(self, node, seqtyp, deep=False):
+-        # type: (Any, Any, bool) -> Any
+-        if not isinstance(node, SequenceNode):
+-            raise ConstructorError(
+-                None, None, 'expected a sequence node, but found %s' % node.id, node.start_mark
+-            )
+-        ret_val = []
+-        if node.comment:
+-            seqtyp._yaml_add_comment(node.comment[:2])
+-            if len(node.comment) > 2:
+-                seqtyp.yaml_end_comment_extend(node.comment[2], clear=True)
+-        if node.anchor:
+-            from dynaconf.vendor.ruamel.yaml.serializer import templated_id
+-
+-            if not templated_id(node.anchor):
+-                seqtyp.yaml_set_anchor(node.anchor)
+-        for idx, child in enumerate(node.value):
+-            if child.comment:
+-                seqtyp._yaml_add_comment(child.comment, key=idx)
+-                child.comment = None  # if moved to sequence remove from child
+-            ret_val.append(self.construct_object(child, deep=deep))
+-            seqtyp._yaml_set_idx_line_col(
+-                idx, [child.start_mark.line, child.start_mark.column]
+-            )
+-        return ret_val
+-
+-    def flatten_mapping(self, node):
+-        # type: (Any) -> Any
+-        """
+-        This implements the merge key feature http://yaml.org/type/merge.html
+-        by inserting keys from the merge dict/list of dicts if not yet
+-        available in this node
+-        """
+-
+-        def constructed(value_node):
+-            # type: (Any) -> Any
+-            # If the contents of a merge are defined within the
+-            # merge marker, then they won't have been constructed
+-            # yet. But if they were already constructed, we need to use
+-            # the existing object.
+-            if value_node in self.constructed_objects:
+-                value = self.constructed_objects[value_node]
+-            else:
+-                value = self.construct_object(value_node, deep=False)
+-            return value
+-
+-        # merge = []
+-        merge_map_list = []  # type: List[Any]
+-        index = 0
+-        while index < len(node.value):
+-            key_node, value_node = node.value[index]
+-            if key_node.tag == u'tag:yaml.org,2002:merge':
+-                if merge_map_list:  # double << key
+-                    if self.allow_duplicate_keys:
+-                        del node.value[index]
+-                        index += 1
+-                        continue
+-                    args = [
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'found duplicate key "{}"'.format(key_node.value),
+-                        key_node.start_mark,
+-                        """
+-                        To suppress this check see:
+-                           http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
+-                        """,
+-                        """\
+-                        Duplicate keys will become an error in future releases, and are errors
+-                        by default when using the new API.
+-                        """,
+-                    ]
+-                    if self.allow_duplicate_keys is None:
+-                        warnings.warn(DuplicateKeyFutureWarning(*args))
+-                    else:
+-                        raise DuplicateKeyError(*args)
+-                del node.value[index]
+-                if isinstance(value_node, MappingNode):
+-                    merge_map_list.append((index, constructed(value_node)))
+-                    # self.flatten_mapping(value_node)
+-                    # merge.extend(value_node.value)
+-                elif isinstance(value_node, SequenceNode):
+-                    # submerge = []
+-                    for subnode in value_node.value:
+-                        if not isinstance(subnode, MappingNode):
+-                            raise ConstructorError(
+-                                'while constructing a mapping',
+-                                node.start_mark,
+-                                'expected a mapping for merging, but found %s' % subnode.id,
+-                                subnode.start_mark,
+-                            )
+-                        merge_map_list.append((index, constructed(subnode)))
+-                    #     self.flatten_mapping(subnode)
+-                    #     submerge.append(subnode.value)
+-                    # submerge.reverse()
+-                    # for value in submerge:
+-                    #     merge.extend(value)
+-                else:
+-                    raise ConstructorError(
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'expected a mapping or list of mappings for merging, '
+-                        'but found %s' % value_node.id,
+-                        value_node.start_mark,
+-                    )
+-            elif key_node.tag == u'tag:yaml.org,2002:value':
+-                key_node.tag = u'tag:yaml.org,2002:str'
+-                index += 1
+-            else:
+-                index += 1
+-        return merge_map_list
+-        # if merge:
+-        #     node.value = merge + node.value
+-
+-    def _sentinel(self):
+-        # type: () -> None
+-        pass
+-
+-    def construct_mapping(self, node, maptyp, deep=False):  # type: ignore
+-        # type: (Any, Any, bool) -> Any
+-        if not isinstance(node, MappingNode):
+-            raise ConstructorError(
+-                None, None, 'expected a mapping node, but found %s' % node.id, node.start_mark
+-            )
+-        merge_map = self.flatten_mapping(node)
+-        # mapping = {}
+-        if node.comment:
+-            maptyp._yaml_add_comment(node.comment[:2])
+-            if len(node.comment) > 2:
+-                maptyp.yaml_end_comment_extend(node.comment[2], clear=True)
+-        if node.anchor:
+-            from dynaconf.vendor.ruamel.yaml.serializer import templated_id
+-
+-            if not templated_id(node.anchor):
+-                maptyp.yaml_set_anchor(node.anchor)
+-        last_key, last_value = None, self._sentinel
+-        for key_node, value_node in node.value:
+-            # keys can be list -> deep
+-            key = self.construct_object(key_node, deep=True)
+-            # lists are not hashable, but tuples are
+-            if not isinstance(key, Hashable):
+-                if isinstance(key, MutableSequence):
+-                    key_s = CommentedKeySeq(key)
+-                    if key_node.flow_style is True:
+-                        key_s.fa.set_flow_style()
+-                    elif key_node.flow_style is False:
+-                        key_s.fa.set_block_style()
+-                    key = key_s
+-                elif isinstance(key, MutableMapping):
+-                    key_m = CommentedKeyMap(key)
+-                    if key_node.flow_style is True:
+-                        key_m.fa.set_flow_style()
+-                    elif key_node.flow_style is False:
+-                        key_m.fa.set_block_style()
+-                    key = key_m
+-            if PY2:
+-                try:
+-                    hash(key)
+-                except TypeError as exc:
+-                    raise ConstructorError(
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'found unacceptable key (%s)' % exc,
+-                        key_node.start_mark,
+-                    )
+-            else:
+-                if not isinstance(key, Hashable):
+-                    raise ConstructorError(
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'found unhashable key',
+-                        key_node.start_mark,
+-                    )
+-            value = self.construct_object(value_node, deep=deep)
+-            if self.check_mapping_key(node, key_node, maptyp, key, value):
+-                if key_node.comment and len(key_node.comment) > 4 and key_node.comment[4]:
+-                    if last_value is None:
+-                        key_node.comment[0] = key_node.comment.pop(4)
+-                        maptyp._yaml_add_comment(key_node.comment, value=last_key)
+-                    else:
+-                        key_node.comment[2] = key_node.comment.pop(4)
+-                        maptyp._yaml_add_comment(key_node.comment, key=key)
+-                    key_node.comment = None
+-                if key_node.comment:
+-                    maptyp._yaml_add_comment(key_node.comment, key=key)
+-                if value_node.comment:
+-                    maptyp._yaml_add_comment(value_node.comment, value=key)
+-                maptyp._yaml_set_kv_line_col(
+-                    key,
+-                    [
+-                        key_node.start_mark.line,
+-                        key_node.start_mark.column,
+-                        value_node.start_mark.line,
+-                        value_node.start_mark.column,
+-                    ],
+-                )
+-                maptyp[key] = value
+-                last_key, last_value = key, value  # could use indexing
+-        # do this last, or <<: before a key will prevent insertion in instances
+-        # of collections.OrderedDict (as they have no __contains__
+-        if merge_map:
+-            maptyp.add_yaml_merge(merge_map)
+-
+-    def construct_setting(self, node, typ, deep=False):
+-        # type: (Any, Any, bool) -> Any
+-        if not isinstance(node, MappingNode):
+-            raise ConstructorError(
+-                None, None, 'expected a mapping node, but found %s' % node.id, node.start_mark
+-            )
+-        if node.comment:
+-            typ._yaml_add_comment(node.comment[:2])
+-            if len(node.comment) > 2:
+-                typ.yaml_end_comment_extend(node.comment[2], clear=True)
+-        if node.anchor:
+-            from dynaconf.vendor.ruamel.yaml.serializer import templated_id
+-
+-            if not templated_id(node.anchor):
+-                typ.yaml_set_anchor(node.anchor)
+-        for key_node, value_node in node.value:
+-            # keys can be list -> deep
+-            key = self.construct_object(key_node, deep=True)
+-            # lists are not hashable, but tuples are
+-            if not isinstance(key, Hashable):
+-                if isinstance(key, list):
+-                    key = tuple(key)
+-            if PY2:
+-                try:
+-                    hash(key)
+-                except TypeError as exc:
+-                    raise ConstructorError(
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'found unacceptable key (%s)' % exc,
+-                        key_node.start_mark,
+-                    )
+-            else:
+-                if not isinstance(key, Hashable):
+-                    raise ConstructorError(
+-                        'while constructing a mapping',
+-                        node.start_mark,
+-                        'found unhashable key',
+-                        key_node.start_mark,
+-                    )
+-            # construct but should be null
+-            value = self.construct_object(value_node, deep=deep)  # NOQA
+-            self.check_set_key(node, key_node, typ, key)
+-            if key_node.comment:
+-                typ._yaml_add_comment(key_node.comment, key=key)
+-            if value_node.comment:
+-                typ._yaml_add_comment(value_node.comment, value=key)
+-            typ.add(key)
+-
+-    def construct_yaml_seq(self, node):
+-        # type: (Any) -> Any
+-        data = CommentedSeq()
+-        data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
+-        if node.comment:
+-            data._yaml_add_comment(node.comment)
+-        yield data
+-        data.extend(self.construct_rt_sequence(node, data))
+-        self.set_collection_style(data, node)
+-
+-    def construct_yaml_map(self, node):
+-        # type: (Any) -> Any
+-        data = CommentedMap()
+-        data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
+-        yield data
+-        self.construct_mapping(node, data, deep=True)
+-        self.set_collection_style(data, node)
+-
+-    def set_collection_style(self, data, node):
+-        # type: (Any, Any) -> None
+-        if len(data) == 0:
+-            return
+-        if node.flow_style is True:
+-            data.fa.set_flow_style()
+-        elif node.flow_style is False:
+-            data.fa.set_block_style()
+-
+-    def construct_yaml_object(self, node, cls):
+-        # type: (Any, Any) -> Any
+-        data = cls.__new__(cls)
+-        yield data
+-        if hasattr(data, '__setstate__'):
+-            state = SafeConstructor.construct_mapping(self, node, deep=True)
+-            data.__setstate__(state)
+-        else:
+-            state = SafeConstructor.construct_mapping(self, node)
+-            data.__dict__.update(state)
+-
+-    def construct_yaml_omap(self, node):
+-        # type: (Any) -> Any
+-        # Note: we do now check for duplicate keys
+-        omap = CommentedOrderedMap()
+-        omap._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
+-        if node.flow_style is True:
+-            omap.fa.set_flow_style()
+-        elif node.flow_style is False:
+-            omap.fa.set_block_style()
+-        yield omap
+-        if node.comment:
+-            omap._yaml_add_comment(node.comment[:2])
+-            if len(node.comment) > 2:
+-                omap.yaml_end_comment_extend(node.comment[2], clear=True)
+-        if not isinstance(node, SequenceNode):
+-            raise ConstructorError(
+-                'while constructing an ordered map',
+-                node.start_mark,
+-                'expected a sequence, but found %s' % node.id,
+-                node.start_mark,
+-            )
+-        for subnode in node.value:
+-            if not isinstance(subnode, MappingNode):
+-                raise ConstructorError(
+-                    'while constructing an ordered map',
+-                    node.start_mark,
+-                    'expected a mapping of length 1, but found %s' % subnode.id,
+-                    subnode.start_mark,
+-                )
+-            if len(subnode.value) != 1:
+-                raise ConstructorError(
+-                    'while constructing an ordered map',
+-                    node.start_mark,
+-                    'expected a single mapping item, but found %d items' % len(subnode.value),
+-                    subnode.start_mark,
+-                )
+-            key_node, value_node = subnode.value[0]
+-            key = self.construct_object(key_node)
+-            assert key not in omap
+-            value = self.construct_object(value_node)
+-            if key_node.comment:
+-                omap._yaml_add_comment(key_node.comment, key=key)
+-            if subnode.comment:
+-                omap._yaml_add_comment(subnode.comment, key=key)
+-            if value_node.comment:
+-                omap._yaml_add_comment(value_node.comment, value=key)
+-            omap[key] = value
+-
+-    def construct_yaml_set(self, node):
+-        # type: (Any) -> Any
+-        data = CommentedSet()
+-        data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
+-        yield data
+-        self.construct_setting(node, data)
+-
+-    def construct_undefined(self, node):
+-        # type: (Any) -> Any
+-        try:
+-            if isinstance(node, MappingNode):
+-                data = CommentedMap()
+-                data._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
+-                if node.flow_style is True:
+-                    data.fa.set_flow_style()
+-                elif node.flow_style is False:
+-                    data.fa.set_block_style()
+-                data.yaml_set_tag(node.tag)
+-                yield data
+-                if node.anchor:
+-                    data.yaml_set_anchor(node.anchor)
+-                self.construct_mapping(node, data)
+-                return
+-            elif isinstance(node, ScalarNode):
+-                data2 = TaggedScalar()
+-                data2.value = self.construct_scalar(node)
+-                data2.style = node.style
+-                data2.yaml_set_tag(node.tag)
+-                yield data2
+-                if node.anchor:
+-                    data2.yaml_set_anchor(node.anchor, always_dump=True)
+-                return
+-            elif isinstance(node, SequenceNode):
+-                data3 = CommentedSeq()
+-                data3._yaml_set_line_col(node.start_mark.line, node.start_mark.column)
+-                if node.flow_style is True:
+-                    data3.fa.set_flow_style()
+-                elif node.flow_style is False:
+-                    data3.fa.set_block_style()
+-                data3.yaml_set_tag(node.tag)
+-                yield data3
+-                if node.anchor:
+-                    data3.yaml_set_anchor(node.anchor)
+-                data3.extend(self.construct_sequence(node))
+-                return
+-        except:  # NOQA
+-            pass
+-        raise ConstructorError(
+-            None,
+-            None,
+-            'could not determine a constructor for the tag %r' % utf8(node.tag),
+-            node.start_mark,
+-        )
+-
+-    def construct_yaml_timestamp(self, node, values=None):
+-        # type: (Any, Any) -> Any
+-        try:
+-            match = self.timestamp_regexp.match(node.value)
+-        except TypeError:
+-            match = None
+-        if match is None:
+-            raise ConstructorError(
+-                None,
+-                None,
+-                'failed to construct timestamp from "{}"'.format(node.value),
+-                node.start_mark,
+-            )
+-        values = match.groupdict()
+-        if not values['hour']:
+-            return SafeConstructor.construct_yaml_timestamp(self, node, values)
+-        for part in ['t', 'tz_sign', 'tz_hour', 'tz_minute']:
+-            if values[part]:
+-                break
+-        else:
+-            return SafeConstructor.construct_yaml_timestamp(self, node, values)
+-        year = int(values['year'])
+-        month = int(values['month'])
+-        day = int(values['day'])
+-        hour = int(values['hour'])
+-        minute = int(values['minute'])
+-        second = int(values['second'])
+-        fraction = 0
+-        if values['fraction']:
+-            fraction_s = values['fraction'][:6]
+-            while len(fraction_s) < 6:
+-                fraction_s += '0'
+-            fraction = int(fraction_s)
+-            if len(values['fraction']) > 6 and int(values['fraction'][6]) > 4:
+-                fraction += 1
+-        delta = None
+-        if values['tz_sign']:
+-            tz_hour = int(values['tz_hour'])
+-            minutes = values['tz_minute']
+-            tz_minute = int(minutes) if minutes else 0
+-            delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
+-            if values['tz_sign'] == '-':
+-                delta = -delta
+-        if delta:
+-            dt = datetime.datetime(year, month, day, hour, minute)
+-            dt -= delta
+-            data = TimeStamp(dt.year, dt.month, dt.day, dt.hour, dt.minute, second, fraction)
+-            data._yaml['delta'] = delta
+-            tz = values['tz_sign'] + values['tz_hour']
+-            if values['tz_minute']:
+-                tz += ':' + values['tz_minute']
+-            data._yaml['tz'] = tz
+-        else:
+-            data = TimeStamp(year, month, day, hour, minute, second, fraction)
+-            if values['tz']:  # no delta
+-                data._yaml['tz'] = values['tz']
+-
+-        if values['t']:
+-            data._yaml['t'] = True
+-        return data
+-
+-    def construct_yaml_bool(self, node):
+-        # type: (Any) -> Any
+-        b = SafeConstructor.construct_yaml_bool(self, node)
+-        if node.anchor:
+-            return ScalarBoolean(b, anchor=node.anchor)
+-        return b
+-
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:null', RoundTripConstructor.construct_yaml_null
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:bool', RoundTripConstructor.construct_yaml_bool
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:int', RoundTripConstructor.construct_yaml_int
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:float', RoundTripConstructor.construct_yaml_float
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:binary', RoundTripConstructor.construct_yaml_binary
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:timestamp', RoundTripConstructor.construct_yaml_timestamp
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:omap', RoundTripConstructor.construct_yaml_omap
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:pairs', RoundTripConstructor.construct_yaml_pairs
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:set', RoundTripConstructor.construct_yaml_set
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:str', RoundTripConstructor.construct_yaml_str
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:seq', RoundTripConstructor.construct_yaml_seq
+-)
+-
+-RoundTripConstructor.add_constructor(
+-    u'tag:yaml.org,2002:map', RoundTripConstructor.construct_yaml_map
+-)
+-
+-RoundTripConstructor.add_constructor(None, RoundTripConstructor.construct_undefined)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/cyaml.py b/dynaconf/vendor_src/ruamel/yaml/cyaml.py
+deleted file mode 100644
+index 2db5b01..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/cyaml.py
++++ /dev/null
+@@ -1,185 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-from _ruamel_yaml import CParser, CEmitter  # type: ignore
+-
+-from .constructor import Constructor, BaseConstructor, SafeConstructor
+-from .representer import Representer, SafeRepresenter, BaseRepresenter
+-from .resolver import Resolver, BaseResolver
+-
+-if False:  # MYPY
+-    from typing import Any, Union, Optional  # NOQA
+-    from .compat import StreamTextType, StreamType, VersionType  # NOQA
+-
+-__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader', 'CBaseDumper', 'CSafeDumper', 'CDumper']
+-
+-
+-# this includes some hacks to solve the  usage of resolver by lower level
+-# parts of the parser
+-
+-
+-class CBaseLoader(CParser, BaseConstructor, BaseResolver):  # type: ignore
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        CParser.__init__(self, stream)
+-        self._parser = self._composer = self
+-        BaseConstructor.__init__(self, loader=self)
+-        BaseResolver.__init__(self, loadumper=self)
+-        # self.descend_resolver = self._resolver.descend_resolver
+-        # self.ascend_resolver = self._resolver.ascend_resolver
+-        # self.resolve = self._resolver.resolve
+-
+-
+-class CSafeLoader(CParser, SafeConstructor, Resolver):  # type: ignore
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        CParser.__init__(self, stream)
+-        self._parser = self._composer = self
+-        SafeConstructor.__init__(self, loader=self)
+-        Resolver.__init__(self, loadumper=self)
+-        # self.descend_resolver = self._resolver.descend_resolver
+-        # self.ascend_resolver = self._resolver.ascend_resolver
+-        # self.resolve = self._resolver.resolve
+-
+-
+-class CLoader(CParser, Constructor, Resolver):  # type: ignore
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        CParser.__init__(self, stream)
+-        self._parser = self._composer = self
+-        Constructor.__init__(self, loader=self)
+-        Resolver.__init__(self, loadumper=self)
+-        # self.descend_resolver = self._resolver.descend_resolver
+-        # self.ascend_resolver = self._resolver.ascend_resolver
+-        # self.resolve = self._resolver.resolve
+-
+-
+-class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):  # type: ignore
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None   # NOQA
+-        CEmitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            encoding=encoding,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-        )
+-        self._emitter = self._serializer = self._representer = self
+-        BaseRepresenter.__init__(
+-            self,
+-            default_style=default_style,
+-            default_flow_style=default_flow_style,
+-            dumper=self,
+-        )
+-        BaseResolver.__init__(self, loadumper=self)
+-
+-
+-class CSafeDumper(CEmitter, SafeRepresenter, Resolver):  # type: ignore
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None   # NOQA
+-        self._emitter = self._serializer = self._representer = self
+-        CEmitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            encoding=encoding,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-        )
+-        self._emitter = self._serializer = self._representer = self
+-        SafeRepresenter.__init__(
+-            self, default_style=default_style, default_flow_style=default_flow_style
+-        )
+-        Resolver.__init__(self)
+-
+-
+-class CDumper(CEmitter, Representer, Resolver):  # type: ignore
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None   # NOQA
+-        CEmitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            encoding=encoding,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-        )
+-        self._emitter = self._serializer = self._representer = self
+-        Representer.__init__(
+-            self, default_style=default_style, default_flow_style=default_flow_style
+-        )
+-        Resolver.__init__(self)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/dumper.py b/dynaconf/vendor_src/ruamel/yaml/dumper.py
+deleted file mode 100644
+index a2cd7b4..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/dumper.py
++++ /dev/null
+@@ -1,221 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-from .emitter import Emitter
+-from .serializer import Serializer
+-from .representer import (
+-    Representer,
+-    SafeRepresenter,
+-    BaseRepresenter,
+-    RoundTripRepresenter,
+-)
+-from .resolver import Resolver, BaseResolver, VersionedResolver
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, List, Union, Optional  # NOQA
+-    from .compat import StreamType, VersionType  # NOQA
+-
+-__all__ = ['BaseDumper', 'SafeDumper', 'Dumper', 'RoundTripDumper']
+-
+-
+-class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (Any, StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None   # NOQA
+-        Emitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            block_seq_indent=block_seq_indent,
+-            dumper=self,
+-        )
+-        Serializer.__init__(
+-            self,
+-            encoding=encoding,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-            dumper=self,
+-        )
+-        BaseRepresenter.__init__(
+-            self,
+-            default_style=default_style,
+-            default_flow_style=default_flow_style,
+-            dumper=self,
+-        )
+-        BaseResolver.__init__(self, loadumper=self)
+-
+-
+-class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None  # NOQA
+-        Emitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            block_seq_indent=block_seq_indent,
+-            dumper=self,
+-        )
+-        Serializer.__init__(
+-            self,
+-            encoding=encoding,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-            dumper=self,
+-        )
+-        SafeRepresenter.__init__(
+-            self,
+-            default_style=default_style,
+-            default_flow_style=default_flow_style,
+-            dumper=self,
+-        )
+-        Resolver.__init__(self, loadumper=self)
+-
+-
+-class Dumper(Emitter, Serializer, Representer, Resolver):
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None   # NOQA
+-        Emitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            block_seq_indent=block_seq_indent,
+-            dumper=self,
+-        )
+-        Serializer.__init__(
+-            self,
+-            encoding=encoding,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-            dumper=self,
+-        )
+-        Representer.__init__(
+-            self,
+-            default_style=default_style,
+-            default_flow_style=default_flow_style,
+-            dumper=self,
+-        )
+-        Resolver.__init__(self, loadumper=self)
+-
+-
+-class RoundTripDumper(Emitter, Serializer, RoundTripRepresenter, VersionedResolver):
+-    def __init__(
+-        self,
+-        stream,
+-        default_style=None,
+-        default_flow_style=None,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-    ):
+-        # type: (StreamType, Any, Optional[bool], Optional[int], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None  # NOQA
+-        Emitter.__init__(
+-            self,
+-            stream,
+-            canonical=canonical,
+-            indent=indent,
+-            width=width,
+-            allow_unicode=allow_unicode,
+-            line_break=line_break,
+-            block_seq_indent=block_seq_indent,
+-            top_level_colon_align=top_level_colon_align,
+-            prefix_colon=prefix_colon,
+-            dumper=self,
+-        )
+-        Serializer.__init__(
+-            self,
+-            encoding=encoding,
+-            explicit_start=explicit_start,
+-            explicit_end=explicit_end,
+-            version=version,
+-            tags=tags,
+-            dumper=self,
+-        )
+-        RoundTripRepresenter.__init__(
+-            self,
+-            default_style=default_style,
+-            default_flow_style=default_flow_style,
+-            dumper=self,
+-        )
+-        VersionedResolver.__init__(self, loader=self)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/emitter.py b/dynaconf/vendor_src/ruamel/yaml/emitter.py
+deleted file mode 100644
+index c1eff8b..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/emitter.py
++++ /dev/null
+@@ -1,1688 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-from __future__ import print_function
+-
+-# Emitter expects events obeying the following grammar:
+-# stream ::= STREAM-START document* STREAM-END
+-# document ::= DOCUMENT-START node DOCUMENT-END
+-# node ::= SCALAR | sequence | mapping
+-# sequence ::= SEQUENCE-START node* SEQUENCE-END
+-# mapping ::= MAPPING-START (node node)* MAPPING-END
+-
+-import sys
+-from .error import YAMLError, YAMLStreamError
+-from .events import *  # NOQA
+-
+-# fmt: off
+-from .compat import utf8, text_type, PY2, nprint, dbg, DBG_EVENT, check_anchorname_char
+-# fmt: on
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, List, Union, Text, Tuple, Optional  # NOQA
+-    from .compat import StreamType  # NOQA
+-
+-__all__ = ['Emitter', 'EmitterError']
+-
+-
+-class EmitterError(YAMLError):
+-    pass
+-
+-
+-class ScalarAnalysis(object):
+-    def __init__(
+-        self,
+-        scalar,
+-        empty,
+-        multiline,
+-        allow_flow_plain,
+-        allow_block_plain,
+-        allow_single_quoted,
+-        allow_double_quoted,
+-        allow_block,
+-    ):
+-        # type: (Any, Any, Any, bool, bool, bool, bool, bool) -> None
+-        self.scalar = scalar
+-        self.empty = empty
+-        self.multiline = multiline
+-        self.allow_flow_plain = allow_flow_plain
+-        self.allow_block_plain = allow_block_plain
+-        self.allow_single_quoted = allow_single_quoted
+-        self.allow_double_quoted = allow_double_quoted
+-        self.allow_block = allow_block
+-
+-
+-class Indents(object):
+-    # replacement for the list based stack of None/int
+-    def __init__(self):
+-        # type: () -> None
+-        self.values = []  # type: List[Tuple[int, bool]]
+-
+-    def append(self, val, seq):
+-        # type: (Any, Any) -> None
+-        self.values.append((val, seq))
+-
+-    def pop(self):
+-        # type: () -> Any
+-        return self.values.pop()[0]
+-
+-    def last_seq(self):
+-        # type: () -> bool
+-        # return the seq(uence) value for the element added before the last one
+-        # in increase_indent()
+-        try:
+-            return self.values[-2][1]
+-        except IndexError:
+-            return False
+-
+-    def seq_flow_align(self, seq_indent, column):
+-        # type: (int, int) -> int
+-        # extra spaces because of dash
+-        if len(self.values) < 2 or not self.values[-1][1]:
+-            return 0
+-        # -1 for the dash
+-        base = self.values[-1][0] if self.values[-1][0] is not None else 0
+-        return base + seq_indent - column - 1
+-
+-    def __len__(self):
+-        # type: () -> int
+-        return len(self.values)
+-
+-
+-class Emitter(object):
+-    # fmt: off
+-    DEFAULT_TAG_PREFIXES = {
+-        u'!': u'!',
+-        u'tag:yaml.org,2002:': u'!!',
+-    }
+-    # fmt: on
+-
+-    MAX_SIMPLE_KEY_LENGTH = 128
+-
+-    def __init__(
+-        self,
+-        stream,
+-        canonical=None,
+-        indent=None,
+-        width=None,
+-        allow_unicode=None,
+-        line_break=None,
+-        block_seq_indent=None,
+-        top_level_colon_align=None,
+-        prefix_colon=None,
+-        brace_single_entry_mapping_in_flow_sequence=None,
+-        dumper=None,
+-    ):
+-        # type: (StreamType, Any, Optional[int], Optional[int], Optional[bool], Any, Optional[int], Optional[bool], Any, Optional[bool], Any) -> None  # NOQA
+-        self.dumper = dumper
+-        if self.dumper is not None and getattr(self.dumper, '_emitter', None) is None:
+-            self.dumper._emitter = self
+-        self.stream = stream
+-
+-        # Encoding can be overriden by STREAM-START.
+-        self.encoding = None  # type: Optional[Text]
+-        self.allow_space_break = None
+-
+-        # Emitter is a state machine with a stack of states to handle nested
+-        # structures.
+-        self.states = []  # type: List[Any]
+-        self.state = self.expect_stream_start  # type: Any
+-
+-        # Current event and the event queue.
+-        self.events = []  # type: List[Any]
+-        self.event = None  # type: Any
+-
+-        # The current indentation level and the stack of previous indents.
+-        self.indents = Indents()
+-        self.indent = None  # type: Optional[int]
+-
+-        # flow_context is an expanding/shrinking list consisting of '{' and '['
+-        # for each unclosed flow context. If empty list that means block context
+-        self.flow_context = []  # type: List[Text]
+-
+-        # Contexts.
+-        self.root_context = False
+-        self.sequence_context = False
+-        self.mapping_context = False
+-        self.simple_key_context = False
+-
+-        # Characteristics of the last emitted character:
+-        #  - current position.
+-        #  - is it a whitespace?
+-        #  - is it an indention character
+-        #    (indentation space, '-', '?', or ':')?
+-        self.line = 0
+-        self.column = 0
+-        self.whitespace = True
+-        self.indention = True
+-        self.compact_seq_seq = True  # dash after dash
+-        self.compact_seq_map = True  # key after dash
+-        # self.compact_ms = False   # dash after key, only when excplicit key with ?
+-        self.no_newline = None  # type: Optional[bool]  # set if directly after `- `
+-
+-        # Whether the document requires an explicit document end indicator
+-        self.open_ended = False
+-
+-        # colon handling
+-        self.colon = u':'
+-        self.prefixed_colon = self.colon if prefix_colon is None else prefix_colon + self.colon
+-        # single entry mappings in flow sequence
+-        self.brace_single_entry_mapping_in_flow_sequence = (
+-         brace_single_entry_mapping_in_flow_sequence
+-        )  # NOQA
+-
+-        # Formatting details.
+-        self.canonical = canonical
+-        self.allow_unicode = allow_unicode
+-        # set to False to get "\Uxxxxxxxx" for non-basic unicode like emojis
+-        self.unicode_supplementary = sys.maxunicode > 0xffff
+-        self.sequence_dash_offset = block_seq_indent if block_seq_indent else 0
+-        self.top_level_colon_align = top_level_colon_align
+-        self.best_sequence_indent = 2
+-        self.requested_indent = indent  # specific for literal zero indent
+-        if indent and 1 < indent < 10:
+-            self.best_sequence_indent = indent
+-        self.best_map_indent = self.best_sequence_indent
+-        # if self.best_sequence_indent < self.sequence_dash_offset + 1:
+-        #     self.best_sequence_indent = self.sequence_dash_offset + 1
+-        self.best_width = 80
+-        if width and width > self.best_sequence_indent * 2:
+-            self.best_width = width
+-        self.best_line_break = u'\n'  # type: Any
+-        if line_break in [u'\r', u'\n', u'\r\n']:
+-            self.best_line_break = line_break
+-
+-        # Tag prefixes.
+-        self.tag_prefixes = None  # type: Any
+-
+-        # Prepared anchor and tag.
+-        self.prepared_anchor = None  # type: Any
+-        self.prepared_tag = None  # type: Any
+-
+-        # Scalar analysis and style.
+-        self.analysis = None  # type: Any
+-        self.style = None  # type: Any
+-
+-        self.scalar_after_indicator = True  # write a scalar on the same line as `---`
+-
+-    @property
+-    def stream(self):
+-        # type: () -> Any
+-        try:
+-            return self._stream
+-        except AttributeError:
+-            raise YAMLStreamError('output stream needs to specified')
+-
+-    @stream.setter
+-    def stream(self, val):
+-        # type: (Any) -> None
+-        if val is None:
+-            return
+-        if not hasattr(val, 'write'):
+-            raise YAMLStreamError('stream argument needs to have a write() method')
+-        self._stream = val
+-
+-    @property
+-    def serializer(self):
+-        # type: () -> Any
+-        try:
+-            if hasattr(self.dumper, 'typ'):
+-                return self.dumper.serializer
+-            return self.dumper._serializer
+-        except AttributeError:
+-            return self  # cyaml
+-
+-    @property
+-    def flow_level(self):
+-        # type: () -> int
+-        return len(self.flow_context)
+-
+-    def dispose(self):
+-        # type: () -> None
+-        # Reset the state attributes (to clear self-references)
+-        self.states = []
+-        self.state = None
+-
+-    def emit(self, event):
+-        # type: (Any) -> None
+-        if dbg(DBG_EVENT):
+-            nprint(event)
+-        self.events.append(event)
+-        while not self.need_more_events():
+-            self.event = self.events.pop(0)
+-            self.state()
+-            self.event = None
+-
+-    # In some cases, we wait for a few next events before emitting.
+-
+-    def need_more_events(self):
+-        # type: () -> bool
+-        if not self.events:
+-            return True
+-        event = self.events[0]
+-        if isinstance(event, DocumentStartEvent):
+-            return self.need_events(1)
+-        elif isinstance(event, SequenceStartEvent):
+-            return self.need_events(2)
+-        elif isinstance(event, MappingStartEvent):
+-            return self.need_events(3)
+-        else:
+-            return False
+-
+-    def need_events(self, count):
+-        # type: (int) -> bool
+-        level = 0
+-        for event in self.events[1:]:
+-            if isinstance(event, (DocumentStartEvent, CollectionStartEvent)):
+-                level += 1
+-            elif isinstance(event, (DocumentEndEvent, CollectionEndEvent)):
+-                level -= 1
+-            elif isinstance(event, StreamEndEvent):
+-                level = -1
+-            if level < 0:
+-                return False
+-        return len(self.events) < count + 1
+-
+-    def increase_indent(self, flow=False, sequence=None, indentless=False):
+-        # type: (bool, Optional[bool], bool) -> None
+-        self.indents.append(self.indent, sequence)
+-        if self.indent is None:  # top level
+-            if flow:
+-                # self.indent = self.best_sequence_indent if self.indents.last_seq() else \
+-                #              self.best_map_indent
+-                # self.indent = self.best_sequence_indent
+-                self.indent = self.requested_indent
+-            else:
+-                self.indent = 0
+-        elif not indentless:
+-            self.indent += (
+-                self.best_sequence_indent if self.indents.last_seq() else self.best_map_indent
+-            )
+-            # if self.indents.last_seq():
+-            #     if self.indent == 0: # top level block sequence
+-            #         self.indent = self.best_sequence_indent - self.sequence_dash_offset
+-            #     else:
+-            #         self.indent += self.best_sequence_indent
+-            # else:
+-            #     self.indent += self.best_map_indent
+-
+-    # States.
+-
+-    # Stream handlers.
+-
+-    def expect_stream_start(self):
+-        # type: () -> None
+-        if isinstance(self.event, StreamStartEvent):
+-            if PY2:
+-                if self.event.encoding and not getattr(self.stream, 'encoding', None):
+-                    self.encoding = self.event.encoding
+-            else:
+-                if self.event.encoding and not hasattr(self.stream, 'encoding'):
+-                    self.encoding = self.event.encoding
+-            self.write_stream_start()
+-            self.state = self.expect_first_document_start
+-        else:
+-            raise EmitterError('expected StreamStartEvent, but got %s' % (self.event,))
+-
+-    def expect_nothing(self):
+-        # type: () -> None
+-        raise EmitterError('expected nothing, but got %s' % (self.event,))
+-
+-    # Document handlers.
+-
+-    def expect_first_document_start(self):
+-        # type: () -> Any
+-        return self.expect_document_start(first=True)
+-
+-    def expect_document_start(self, first=False):
+-        # type: (bool) -> None
+-        if isinstance(self.event, DocumentStartEvent):
+-            if (self.event.version or self.event.tags) and self.open_ended:
+-                self.write_indicator(u'...', True)
+-                self.write_indent()
+-            if self.event.version:
+-                version_text = self.prepare_version(self.event.version)
+-                self.write_version_directive(version_text)
+-            self.tag_prefixes = self.DEFAULT_TAG_PREFIXES.copy()
+-            if self.event.tags:
+-                handles = sorted(self.event.tags.keys())
+-                for handle in handles:
+-                    prefix = self.event.tags[handle]
+-                    self.tag_prefixes[prefix] = handle
+-                    handle_text = self.prepare_tag_handle(handle)
+-                    prefix_text = self.prepare_tag_prefix(prefix)
+-                    self.write_tag_directive(handle_text, prefix_text)
+-            implicit = (
+-                first
+-                and not self.event.explicit
+-                and not self.canonical
+-                and not self.event.version
+-                and not self.event.tags
+-                and not self.check_empty_document()
+-            )
+-            if not implicit:
+-                self.write_indent()
+-                self.write_indicator(u'---', True)
+-                if self.canonical:
+-                    self.write_indent()
+-            self.state = self.expect_document_root
+-        elif isinstance(self.event, StreamEndEvent):
+-            if self.open_ended:
+-                self.write_indicator(u'...', True)
+-                self.write_indent()
+-            self.write_stream_end()
+-            self.state = self.expect_nothing
+-        else:
+-            raise EmitterError('expected DocumentStartEvent, but got %s' % (self.event,))
+-
+-    def expect_document_end(self):
+-        # type: () -> None
+-        if isinstance(self.event, DocumentEndEvent):
+-            self.write_indent()
+-            if self.event.explicit:
+-                self.write_indicator(u'...', True)
+-                self.write_indent()
+-            self.flush_stream()
+-            self.state = self.expect_document_start
+-        else:
+-            raise EmitterError('expected DocumentEndEvent, but got %s' % (self.event,))
+-
+-    def expect_document_root(self):
+-        # type: () -> None
+-        self.states.append(self.expect_document_end)
+-        self.expect_node(root=True)
+-
+-    # Node handlers.
+-
+-    def expect_node(self, root=False, sequence=False, mapping=False, simple_key=False):
+-        # type: (bool, bool, bool, bool) -> None
+-        self.root_context = root
+-        self.sequence_context = sequence  # not used in PyYAML
+-        self.mapping_context = mapping
+-        self.simple_key_context = simple_key
+-        if isinstance(self.event, AliasEvent):
+-            self.expect_alias()
+-        elif isinstance(self.event, (ScalarEvent, CollectionStartEvent)):
+-            if (
+-                self.process_anchor(u'&')
+-                and isinstance(self.event, ScalarEvent)
+-                and self.sequence_context
+-            ):
+-                self.sequence_context = False
+-            if (
+-                root
+-                and isinstance(self.event, ScalarEvent)
+-                and not self.scalar_after_indicator
+-            ):
+-                self.write_indent()
+-            self.process_tag()
+-            if isinstance(self.event, ScalarEvent):
+-                # nprint('@', self.indention, self.no_newline, self.column)
+-                self.expect_scalar()
+-            elif isinstance(self.event, SequenceStartEvent):
+-                # nprint('@', self.indention, self.no_newline, self.column)
+-                i2, n2 = self.indention, self.no_newline  # NOQA
+-                if self.event.comment:
+-                    if self.event.flow_style is False and self.event.comment:
+-                        if self.write_post_comment(self.event):
+-                            self.indention = False
+-                            self.no_newline = True
+-                    if self.write_pre_comment(self.event):
+-                        self.indention = i2
+-                        self.no_newline = not self.indention
+-                if (
+-                    self.flow_level
+-                    or self.canonical
+-                    or self.event.flow_style
+-                    or self.check_empty_sequence()
+-                ):
+-                    self.expect_flow_sequence()
+-                else:
+-                    self.expect_block_sequence()
+-            elif isinstance(self.event, MappingStartEvent):
+-                if self.event.flow_style is False and self.event.comment:
+-                    self.write_post_comment(self.event)
+-                if self.event.comment and self.event.comment[1]:
+-                    self.write_pre_comment(self.event)
+-                if (
+-                    self.flow_level
+-                    or self.canonical
+-                    or self.event.flow_style
+-                    or self.check_empty_mapping()
+-                ):
+-                    self.expect_flow_mapping(single=self.event.nr_items == 1)
+-                else:
+-                    self.expect_block_mapping()
+-        else:
+-            raise EmitterError('expected NodeEvent, but got %s' % (self.event,))
+-
+-    def expect_alias(self):
+-        # type: () -> None
+-        if self.event.anchor is None:
+-            raise EmitterError('anchor is not specified for alias')
+-        self.process_anchor(u'*')
+-        self.state = self.states.pop()
+-
+-    def expect_scalar(self):
+-        # type: () -> None
+-        self.increase_indent(flow=True)
+-        self.process_scalar()
+-        self.indent = self.indents.pop()
+-        self.state = self.states.pop()
+-
+-    # Flow sequence handlers.
+-
+-    def expect_flow_sequence(self):
+-        # type: () -> None
+-        ind = self.indents.seq_flow_align(self.best_sequence_indent, self.column)
+-        self.write_indicator(u' ' * ind + u'[', True, whitespace=True)
+-        self.increase_indent(flow=True, sequence=True)
+-        self.flow_context.append('[')
+-        self.state = self.expect_first_flow_sequence_item
+-
+-    def expect_first_flow_sequence_item(self):
+-        # type: () -> None
+-        if isinstance(self.event, SequenceEndEvent):
+-            self.indent = self.indents.pop()
+-            popped = self.flow_context.pop()
+-            assert popped == '['
+-            self.write_indicator(u']', False)
+-            if self.event.comment and self.event.comment[0]:
+-                # eol comment on empty flow sequence
+-                self.write_post_comment(self.event)
+-            elif self.flow_level == 0:
+-                self.write_line_break()
+-            self.state = self.states.pop()
+-        else:
+-            if self.canonical or self.column > self.best_width:
+-                self.write_indent()
+-            self.states.append(self.expect_flow_sequence_item)
+-            self.expect_node(sequence=True)
+-
+-    def expect_flow_sequence_item(self):
+-        # type: () -> None
+-        if isinstance(self.event, SequenceEndEvent):
+-            self.indent = self.indents.pop()
+-            popped = self.flow_context.pop()
+-            assert popped == '['
+-            if self.canonical:
+-                self.write_indicator(u',', False)
+-                self.write_indent()
+-            self.write_indicator(u']', False)
+-            if self.event.comment and self.event.comment[0]:
+-                # eol comment on flow sequence
+-                self.write_post_comment(self.event)
+-            else:
+-                self.no_newline = False
+-            self.state = self.states.pop()
+-        else:
+-            self.write_indicator(u',', False)
+-            if self.canonical or self.column > self.best_width:
+-                self.write_indent()
+-            self.states.append(self.expect_flow_sequence_item)
+-            self.expect_node(sequence=True)
+-
+-    # Flow mapping handlers.
+-
+-    def expect_flow_mapping(self, single=False):
+-        # type: (Optional[bool]) -> None
+-        ind = self.indents.seq_flow_align(self.best_sequence_indent, self.column)
+-        map_init = u'{'
+-        if (
+-            single
+-            and self.flow_level
+-            and self.flow_context[-1] == '['
+-            and not self.canonical
+-            and not self.brace_single_entry_mapping_in_flow_sequence
+-        ):
+-            # single map item with flow context, no curly braces necessary
+-            map_init = u''
+-        self.write_indicator(u' ' * ind + map_init, True, whitespace=True)
+-        self.flow_context.append(map_init)
+-        self.increase_indent(flow=True, sequence=False)
+-        self.state = self.expect_first_flow_mapping_key
+-
+-    def expect_first_flow_mapping_key(self):
+-        # type: () -> None
+-        if isinstance(self.event, MappingEndEvent):
+-            self.indent = self.indents.pop()
+-            popped = self.flow_context.pop()
+-            assert popped == '{'  # empty flow mapping
+-            self.write_indicator(u'}', False)
+-            if self.event.comment and self.event.comment[0]:
+-                # eol comment on empty mapping
+-                self.write_post_comment(self.event)
+-            elif self.flow_level == 0:
+-                self.write_line_break()
+-            self.state = self.states.pop()
+-        else:
+-            if self.canonical or self.column > self.best_width:
+-                self.write_indent()
+-            if not self.canonical and self.check_simple_key():
+-                self.states.append(self.expect_flow_mapping_simple_value)
+-                self.expect_node(mapping=True, simple_key=True)
+-            else:
+-                self.write_indicator(u'?', True)
+-                self.states.append(self.expect_flow_mapping_value)
+-                self.expect_node(mapping=True)
+-
+-    def expect_flow_mapping_key(self):
+-        # type: () -> None
+-        if isinstance(self.event, MappingEndEvent):
+-            # if self.event.comment and self.event.comment[1]:
+-            #     self.write_pre_comment(self.event)
+-            self.indent = self.indents.pop()
+-            popped = self.flow_context.pop()
+-            assert popped in [u'{', u'']
+-            if self.canonical:
+-                self.write_indicator(u',', False)
+-                self.write_indent()
+-            if popped != u'':
+-                self.write_indicator(u'}', False)
+-            if self.event.comment and self.event.comment[0]:
+-                # eol comment on flow mapping, never reached on empty mappings
+-                self.write_post_comment(self.event)
+-            else:
+-                self.no_newline = False
+-            self.state = self.states.pop()
+-        else:
+-            self.write_indicator(u',', False)
+-            if self.canonical or self.column > self.best_width:
+-                self.write_indent()
+-            if not self.canonical and self.check_simple_key():
+-                self.states.append(self.expect_flow_mapping_simple_value)
+-                self.expect_node(mapping=True, simple_key=True)
+-            else:
+-                self.write_indicator(u'?', True)
+-                self.states.append(self.expect_flow_mapping_value)
+-                self.expect_node(mapping=True)
+-
+-    def expect_flow_mapping_simple_value(self):
+-        # type: () -> None
+-        self.write_indicator(self.prefixed_colon, False)
+-        self.states.append(self.expect_flow_mapping_key)
+-        self.expect_node(mapping=True)
+-
+-    def expect_flow_mapping_value(self):
+-        # type: () -> None
+-        if self.canonical or self.column > self.best_width:
+-            self.write_indent()
+-        self.write_indicator(self.prefixed_colon, True)
+-        self.states.append(self.expect_flow_mapping_key)
+-        self.expect_node(mapping=True)
+-
+-    # Block sequence handlers.
+-
+-    def expect_block_sequence(self):
+-        # type: () -> None
+-        if self.mapping_context:
+-            indentless = not self.indention
+-        else:
+-            indentless = False
+-            if not self.compact_seq_seq and self.column != 0:
+-                self.write_line_break()
+-        self.increase_indent(flow=False, sequence=True, indentless=indentless)
+-        self.state = self.expect_first_block_sequence_item
+-
+-    def expect_first_block_sequence_item(self):
+-        # type: () -> Any
+-        return self.expect_block_sequence_item(first=True)
+-
+-    def expect_block_sequence_item(self, first=False):
+-        # type: (bool) -> None
+-        if not first and isinstance(self.event, SequenceEndEvent):
+-            if self.event.comment and self.event.comment[1]:
+-                # final comments on a block list e.g. empty line
+-                self.write_pre_comment(self.event)
+-            self.indent = self.indents.pop()
+-            self.state = self.states.pop()
+-            self.no_newline = False
+-        else:
+-            if self.event.comment and self.event.comment[1]:
+-                self.write_pre_comment(self.event)
+-            nonl = self.no_newline if self.column == 0 else False
+-            self.write_indent()
+-            ind = self.sequence_dash_offset  # if  len(self.indents) > 1 else 0
+-            self.write_indicator(u' ' * ind + u'-', True, indention=True)
+-            if nonl or self.sequence_dash_offset + 2 > self.best_sequence_indent:
+-                self.no_newline = True
+-            self.states.append(self.expect_block_sequence_item)
+-            self.expect_node(sequence=True)
+-
+-    # Block mapping handlers.
+-
+-    def expect_block_mapping(self):
+-        # type: () -> None
+-        if not self.mapping_context and not (self.compact_seq_map or self.column == 0):
+-            self.write_line_break()
+-        self.increase_indent(flow=False, sequence=False)
+-        self.state = self.expect_first_block_mapping_key
+-
+-    def expect_first_block_mapping_key(self):
+-        # type: () -> None
+-        return self.expect_block_mapping_key(first=True)
+-
+-    def expect_block_mapping_key(self, first=False):
+-        # type: (Any) -> None
+-        if not first and isinstance(self.event, MappingEndEvent):
+-            if self.event.comment and self.event.comment[1]:
+-                # final comments from a doc
+-                self.write_pre_comment(self.event)
+-            self.indent = self.indents.pop()
+-            self.state = self.states.pop()
+-        else:
+-            if self.event.comment and self.event.comment[1]:
+-                # final comments from a doc
+-                self.write_pre_comment(self.event)
+-            self.write_indent()
+-            if self.check_simple_key():
+-                if not isinstance(
+-                    self.event, (SequenceStartEvent, MappingStartEvent)
+-                ):  # sequence keys
+-                    try:
+-                        if self.event.style == '?':
+-                            self.write_indicator(u'?', True, indention=True)
+-                    except AttributeError:  # aliases have no style
+-                        pass
+-                self.states.append(self.expect_block_mapping_simple_value)
+-                self.expect_node(mapping=True, simple_key=True)
+-                if isinstance(self.event, AliasEvent):
+-                    self.stream.write(u' ')
+-            else:
+-                self.write_indicator(u'?', True, indention=True)
+-                self.states.append(self.expect_block_mapping_value)
+-                self.expect_node(mapping=True)
+-
+-    def expect_block_mapping_simple_value(self):
+-        # type: () -> None
+-        if getattr(self.event, 'style', None) != '?':
+-            # prefix = u''
+-            if self.indent == 0 and self.top_level_colon_align is not None:
+-                # write non-prefixed colon
+-                c = u' ' * (self.top_level_colon_align - self.column) + self.colon
+-            else:
+-                c = self.prefixed_colon
+-            self.write_indicator(c, False)
+-        self.states.append(self.expect_block_mapping_key)
+-        self.expect_node(mapping=True)
+-
+-    def expect_block_mapping_value(self):
+-        # type: () -> None
+-        self.write_indent()
+-        self.write_indicator(self.prefixed_colon, True, indention=True)
+-        self.states.append(self.expect_block_mapping_key)
+-        self.expect_node(mapping=True)
+-
+-    # Checkers.
+-
+-    def check_empty_sequence(self):
+-        # type: () -> bool
+-        return (
+-            isinstance(self.event, SequenceStartEvent)
+-            and bool(self.events)
+-            and isinstance(self.events[0], SequenceEndEvent)
+-        )
+-
+-    def check_empty_mapping(self):
+-        # type: () -> bool
+-        return (
+-            isinstance(self.event, MappingStartEvent)
+-            and bool(self.events)
+-            and isinstance(self.events[0], MappingEndEvent)
+-        )
+-
+-    def check_empty_document(self):
+-        # type: () -> bool
+-        if not isinstance(self.event, DocumentStartEvent) or not self.events:
+-            return False
+-        event = self.events[0]
+-        return (
+-            isinstance(event, ScalarEvent)
+-            and event.anchor is None
+-            and event.tag is None
+-            and event.implicit
+-            and event.value == ""
+-        )
+-
+-    def check_simple_key(self):
+-        # type: () -> bool
+-        length = 0
+-        if isinstance(self.event, NodeEvent) and self.event.anchor is not None:
+-            if self.prepared_anchor is None:
+-                self.prepared_anchor = self.prepare_anchor(self.event.anchor)
+-            length += len(self.prepared_anchor)
+-        if (
+-            isinstance(self.event, (ScalarEvent, CollectionStartEvent))
+-            and self.event.tag is not None
+-        ):
+-            if self.prepared_tag is None:
+-                self.prepared_tag = self.prepare_tag(self.event.tag)
+-            length += len(self.prepared_tag)
+-        if isinstance(self.event, ScalarEvent):
+-            if self.analysis is None:
+-                self.analysis = self.analyze_scalar(self.event.value)
+-            length += len(self.analysis.scalar)
+-        return length < self.MAX_SIMPLE_KEY_LENGTH and (
+-            isinstance(self.event, AliasEvent)
+-            or (isinstance(self.event, SequenceStartEvent) and self.event.flow_style is True)
+-            or (isinstance(self.event, MappingStartEvent) and self.event.flow_style is True)
+-            or (
+-                isinstance(self.event, ScalarEvent)
+-                # if there is an explicit style for an empty string, it is a simple key
+-                and not (self.analysis.empty and self.style and self.style not in '\'"')
+-                and not self.analysis.multiline
+-            )
+-            or self.check_empty_sequence()
+-            or self.check_empty_mapping()
+-        )
+-
+-    # Anchor, Tag, and Scalar processors.
+-
+-    def process_anchor(self, indicator):
+-        # type: (Any) -> bool
+-        if self.event.anchor is None:
+-            self.prepared_anchor = None
+-            return False
+-        if self.prepared_anchor is None:
+-            self.prepared_anchor = self.prepare_anchor(self.event.anchor)
+-        if self.prepared_anchor:
+-            self.write_indicator(indicator + self.prepared_anchor, True)
+-            # issue 288
+-            self.no_newline = False
+-        self.prepared_anchor = None
+-        return True
+-
+-    def process_tag(self):
+-        # type: () -> None
+-        tag = self.event.tag
+-        if isinstance(self.event, ScalarEvent):
+-            if self.style is None:
+-                self.style = self.choose_scalar_style()
+-            if (not self.canonical or tag is None) and (
+-                (self.style == "" and self.event.implicit[0])
+-                or (self.style != "" and self.event.implicit[1])
+-            ):
+-                self.prepared_tag = None
+-                return
+-            if self.event.implicit[0] and tag is None:
+-                tag = u'!'
+-                self.prepared_tag = None
+-        else:
+-            if (not self.canonical or tag is None) and self.event.implicit:
+-                self.prepared_tag = None
+-                return
+-        if tag is None:
+-            raise EmitterError('tag is not specified')
+-        if self.prepared_tag is None:
+-            self.prepared_tag = self.prepare_tag(tag)
+-        if self.prepared_tag:
+-            self.write_indicator(self.prepared_tag, True)
+-            if (
+-                self.sequence_context
+-                and not self.flow_level
+-                and isinstance(self.event, ScalarEvent)
+-            ):
+-                self.no_newline = True
+-        self.prepared_tag = None
+-
+-    def choose_scalar_style(self):
+-        # type: () -> Any
+-        if self.analysis is None:
+-            self.analysis = self.analyze_scalar(self.event.value)
+-        if self.event.style == '"' or self.canonical:
+-            return '"'
+-        if (not self.event.style or self.event.style == '?') and (
+-            self.event.implicit[0] or not self.event.implicit[2]
+-        ):
+-            if not (
+-                self.simple_key_context and (self.analysis.empty or self.analysis.multiline)
+-            ) and (
+-                self.flow_level
+-                and self.analysis.allow_flow_plain
+-                or (not self.flow_level and self.analysis.allow_block_plain)
+-            ):
+-                return ""
+-        self.analysis.allow_block = True
+-        if self.event.style and self.event.style in '|>':
+-            if (
+-                not self.flow_level
+-                and not self.simple_key_context
+-                and self.analysis.allow_block
+-            ):
+-                return self.event.style
+-        if not self.event.style and self.analysis.allow_double_quoted:
+-            if "'" in self.event.value or '\n' in self.event.value:
+-                return '"'
+-        if not self.event.style or self.event.style == "'":
+-            if self.analysis.allow_single_quoted and not (
+-                self.simple_key_context and self.analysis.multiline
+-            ):
+-                return "'"
+-        return '"'
+-
+-    def process_scalar(self):
+-        # type: () -> None
+-        if self.analysis is None:
+-            self.analysis = self.analyze_scalar(self.event.value)
+-        if self.style is None:
+-            self.style = self.choose_scalar_style()
+-        split = not self.simple_key_context
+-        # if self.analysis.multiline and split    \
+-        #         and (not self.style or self.style in '\'\"'):
+-        #     self.write_indent()
+-        # nprint('xx', self.sequence_context, self.flow_level)
+-        if self.sequence_context and not self.flow_level:
+-            self.write_indent()
+-        if self.style == '"':
+-            self.write_double_quoted(self.analysis.scalar, split)
+-        elif self.style == "'":
+-            self.write_single_quoted(self.analysis.scalar, split)
+-        elif self.style == '>':
+-            self.write_folded(self.analysis.scalar)
+-        elif self.style == '|':
+-            self.write_literal(self.analysis.scalar, self.event.comment)
+-        else:
+-            self.write_plain(self.analysis.scalar, split)
+-        self.analysis = None
+-        self.style = None
+-        if self.event.comment:
+-            self.write_post_comment(self.event)
+-
+-    # Analyzers.
+-
+-    def prepare_version(self, version):
+-        # type: (Any) -> Any
+-        major, minor = version
+-        if major != 1:
+-            raise EmitterError('unsupported YAML version: %d.%d' % (major, minor))
+-        return u'%d.%d' % (major, minor)
+-
+-    def prepare_tag_handle(self, handle):
+-        # type: (Any) -> Any
+-        if not handle:
+-            raise EmitterError('tag handle must not be empty')
+-        if handle[0] != u'!' or handle[-1] != u'!':
+-            raise EmitterError("tag handle must start and end with '!': %r" % (utf8(handle)))
+-        for ch in handle[1:-1]:
+-            if not (
+-                u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' or ch in u'-_'
+-            ):
+-                raise EmitterError(
+-                    'invalid character %r in the tag handle: %r' % (utf8(ch), utf8(handle))
+-                )
+-        return handle
+-
+-    def prepare_tag_prefix(self, prefix):
+-        # type: (Any) -> Any
+-        if not prefix:
+-            raise EmitterError('tag prefix must not be empty')
+-        chunks = []  # type: List[Any]
+-        start = end = 0
+-        if prefix[0] == u'!':
+-            end = 1
+-        ch_set = u"-;/?:@&=+$,_.~*'()[]"
+-        if self.dumper:
+-            version = getattr(self.dumper, 'version', (1, 2))
+-            if version is None or version >= (1, 2):
+-                ch_set += u'#'
+-        while end < len(prefix):
+-            ch = prefix[end]
+-            if u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' or ch in ch_set:
+-                end += 1
+-            else:
+-                if start < end:
+-                    chunks.append(prefix[start:end])
+-                start = end = end + 1
+-                data = utf8(ch)
+-                for ch in data:
+-                    chunks.append(u'%%%02X' % ord(ch))
+-        if start < end:
+-            chunks.append(prefix[start:end])
+-        return "".join(chunks)
+-
+-    def prepare_tag(self, tag):
+-        # type: (Any) -> Any
+-        if not tag:
+-            raise EmitterError('tag must not be empty')
+-        if tag == u'!':
+-            return tag
+-        handle = None
+-        suffix = tag
+-        prefixes = sorted(self.tag_prefixes.keys())
+-        for prefix in prefixes:
+-            if tag.startswith(prefix) and (prefix == u'!' or len(prefix) < len(tag)):
+-                handle = self.tag_prefixes[prefix]
+-                suffix = tag[len(prefix) :]
+-        chunks = []  # type: List[Any]
+-        start = end = 0
+-        ch_set = u"-;/?:@&=+$,_.~*'()[]"
+-        if self.dumper:
+-            version = getattr(self.dumper, 'version', (1, 2))
+-            if version is None or version >= (1, 2):
+-                ch_set += u'#'
+-        while end < len(suffix):
+-            ch = suffix[end]
+-            if (
+-                u'0' <= ch <= u'9'
+-                or u'A' <= ch <= u'Z'
+-                or u'a' <= ch <= u'z'
+-                or ch in ch_set
+-                or (ch == u'!' and handle != u'!')
+-            ):
+-                end += 1
+-            else:
+-                if start < end:
+-                    chunks.append(suffix[start:end])
+-                start = end = end + 1
+-                data = utf8(ch)
+-                for ch in data:
+-                    chunks.append(u'%%%02X' % ord(ch))
+-        if start < end:
+-            chunks.append(suffix[start:end])
+-        suffix_text = "".join(chunks)
+-        if handle:
+-            return u'%s%s' % (handle, suffix_text)
+-        else:
+-            return u'!<%s>' % suffix_text
+-
+-    def prepare_anchor(self, anchor):
+-        # type: (Any) -> Any
+-        if not anchor:
+-            raise EmitterError('anchor must not be empty')
+-        for ch in anchor:
+-            if not check_anchorname_char(ch):
+-                raise EmitterError(
+-                    'invalid character %r in the anchor: %r' % (utf8(ch), utf8(anchor))
+-                )
+-        return anchor
+-
+-    def analyze_scalar(self, scalar):
+-        # type: (Any) -> Any
+-        # Empty scalar is a special case.
+-        if not scalar:
+-            return ScalarAnalysis(
+-                scalar=scalar,
+-                empty=True,
+-                multiline=False,
+-                allow_flow_plain=False,
+-                allow_block_plain=True,
+-                allow_single_quoted=True,
+-                allow_double_quoted=True,
+-                allow_block=False,
+-            )
+-
+-        # Indicators and special characters.
+-        block_indicators = False
+-        flow_indicators = False
+-        line_breaks = False
+-        special_characters = False
+-
+-        # Important whitespace combinations.
+-        leading_space = False
+-        leading_break = False
+-        trailing_space = False
+-        trailing_break = False
+-        break_space = False
+-        space_break = False
+-
+-        # Check document indicators.
+-        if scalar.startswith(u'---') or scalar.startswith(u'...'):
+-            block_indicators = True
+-            flow_indicators = True
+-
+-        # First character or preceded by a whitespace.
+-        preceeded_by_whitespace = True
+-
+-        # Last character or followed by a whitespace.
+-        followed_by_whitespace = len(scalar) == 1 or scalar[1] in u'\0 \t\r\n\x85\u2028\u2029'
+-
+-        # The previous character is a space.
+-        previous_space = False
+-
+-        # The previous character is a break.
+-        previous_break = False
+-
+-        index = 0
+-        while index < len(scalar):
+-            ch = scalar[index]
+-
+-            # Check for indicators.
+-            if index == 0:
+-                # Leading indicators are special characters.
+-                if ch in u'#,[]{}&*!|>\'"%@`':
+-                    flow_indicators = True
+-                    block_indicators = True
+-                if ch in u'?:':  # ToDo
+-                    if self.serializer.use_version == (1, 1):
+-                        flow_indicators = True
+-                    elif len(scalar) == 1:  # single character
+-                        flow_indicators = True
+-                    if followed_by_whitespace:
+-                        block_indicators = True
+-                if ch == u'-' and followed_by_whitespace:
+-                    flow_indicators = True
+-                    block_indicators = True
+-            else:
+-                # Some indicators cannot appear within a scalar as well.
+-                if ch in u',[]{}':  # http://yaml.org/spec/1.2/spec.html#id2788859
+-                    flow_indicators = True
+-                if ch == u'?' and self.serializer.use_version == (1, 1):
+-                    flow_indicators = True
+-                if ch == u':':
+-                    if followed_by_whitespace:
+-                        flow_indicators = True
+-                        block_indicators = True
+-                if ch == u'#' and preceeded_by_whitespace:
+-                    flow_indicators = True
+-                    block_indicators = True
+-
+-            # Check for line breaks, special, and unicode characters.
+-            if ch in u'\n\x85\u2028\u2029':
+-                line_breaks = True
+-            if not (ch == u'\n' or u'\x20' <= ch <= u'\x7E'):
+-                if (
+-                    ch == u'\x85'
+-                    or u'\xA0' <= ch <= u'\uD7FF'
+-                    or u'\uE000' <= ch <= u'\uFFFD'
+-                    or (self.unicode_supplementary and (u'\U00010000' <= ch <= u'\U0010FFFF'))
+-                ) and ch != u'\uFEFF':
+-                    # unicode_characters = True
+-                    if not self.allow_unicode:
+-                        special_characters = True
+-                else:
+-                    special_characters = True
+-
+-            # Detect important whitespace combinations.
+-            if ch == u' ':
+-                if index == 0:
+-                    leading_space = True
+-                if index == len(scalar) - 1:
+-                    trailing_space = True
+-                if previous_break:
+-                    break_space = True
+-                previous_space = True
+-                previous_break = False
+-            elif ch in u'\n\x85\u2028\u2029':
+-                if index == 0:
+-                    leading_break = True
+-                if index == len(scalar) - 1:
+-                    trailing_break = True
+-                if previous_space:
+-                    space_break = True
+-                previous_space = False
+-                previous_break = True
+-            else:
+-                previous_space = False
+-                previous_break = False
+-
+-            # Prepare for the next character.
+-            index += 1
+-            preceeded_by_whitespace = ch in u'\0 \t\r\n\x85\u2028\u2029'
+-            followed_by_whitespace = (
+-                index + 1 >= len(scalar) or scalar[index + 1] in u'\0 \t\r\n\x85\u2028\u2029'
+-            )
+-
+-        # Let's decide what styles are allowed.
+-        allow_flow_plain = True
+-        allow_block_plain = True
+-        allow_single_quoted = True
+-        allow_double_quoted = True
+-        allow_block = True
+-
+-        # Leading and trailing whitespaces are bad for plain scalars.
+-        if leading_space or leading_break or trailing_space or trailing_break:
+-            allow_flow_plain = allow_block_plain = False
+-
+-        # We do not permit trailing spaces for block scalars.
+-        if trailing_space:
+-            allow_block = False
+-
+-        # Spaces at the beginning of a new line are only acceptable for block
+-        # scalars.
+-        if break_space:
+-            allow_flow_plain = allow_block_plain = allow_single_quoted = False
+-
+-        # Spaces followed by breaks, as well as special character are only
+-        # allowed for double quoted scalars.
+-        if special_characters:
+-            allow_flow_plain = allow_block_plain = allow_single_quoted = allow_block = False
+-        elif space_break:
+-            allow_flow_plain = allow_block_plain = allow_single_quoted = False
+-            if not self.allow_space_break:
+-                allow_block = False
+-
+-        # Although the plain scalar writer supports breaks, we never emit
+-        # multiline plain scalars.
+-        if line_breaks:
+-            allow_flow_plain = allow_block_plain = False
+-
+-        # Flow indicators are forbidden for flow plain scalars.
+-        if flow_indicators:
+-            allow_flow_plain = False
+-
+-        # Block indicators are forbidden for block plain scalars.
+-        if block_indicators:
+-            allow_block_plain = False
+-
+-        return ScalarAnalysis(
+-            scalar=scalar,
+-            empty=False,
+-            multiline=line_breaks,
+-            allow_flow_plain=allow_flow_plain,
+-            allow_block_plain=allow_block_plain,
+-            allow_single_quoted=allow_single_quoted,
+-            allow_double_quoted=allow_double_quoted,
+-            allow_block=allow_block,
+-        )
+-
+-    # Writers.
+-
+-    def flush_stream(self):
+-        # type: () -> None
+-        if hasattr(self.stream, 'flush'):
+-            self.stream.flush()
+-
+-    def write_stream_start(self):
+-        # type: () -> None
+-        # Write BOM if needed.
+-        if self.encoding and self.encoding.startswith('utf-16'):
+-            self.stream.write(u'\uFEFF'.encode(self.encoding))
+-
+-    def write_stream_end(self):
+-        # type: () -> None
+-        self.flush_stream()
+-
+-    def write_indicator(self, indicator, need_whitespace, whitespace=False, indention=False):
+-        # type: (Any, Any, bool, bool) -> None
+-        if self.whitespace or not need_whitespace:
+-            data = indicator
+-        else:
+-            data = u' ' + indicator
+-        self.whitespace = whitespace
+-        self.indention = self.indention and indention
+-        self.column += len(data)
+-        self.open_ended = False
+-        if bool(self.encoding):
+-            data = data.encode(self.encoding)
+-        self.stream.write(data)
+-
+-    def write_indent(self):
+-        # type: () -> None
+-        indent = self.indent or 0
+-        if (
+-            not self.indention
+-            or self.column > indent
+-            or (self.column == indent and not self.whitespace)
+-        ):
+-            if bool(self.no_newline):
+-                self.no_newline = False
+-            else:
+-                self.write_line_break()
+-        if self.column < indent:
+-            self.whitespace = True
+-            data = u' ' * (indent - self.column)
+-            self.column = indent
+-            if self.encoding:
+-                data = data.encode(self.encoding)
+-            self.stream.write(data)
+-
+-    def write_line_break(self, data=None):
+-        # type: (Any) -> None
+-        if data is None:
+-            data = self.best_line_break
+-        self.whitespace = True
+-        self.indention = True
+-        self.line += 1
+-        self.column = 0
+-        if bool(self.encoding):
+-            data = data.encode(self.encoding)
+-        self.stream.write(data)
+-
+-    def write_version_directive(self, version_text):
+-        # type: (Any) -> None
+-        data = u'%%YAML %s' % version_text
+-        if self.encoding:
+-            data = data.encode(self.encoding)
+-        self.stream.write(data)
+-        self.write_line_break()
+-
+-    def write_tag_directive(self, handle_text, prefix_text):
+-        # type: (Any, Any) -> None
+-        data = u'%%TAG %s %s' % (handle_text, prefix_text)
+-        if self.encoding:
+-            data = data.encode(self.encoding)
+-        self.stream.write(data)
+-        self.write_line_break()
+-
+-    # Scalar streams.
+-
+-    def write_single_quoted(self, text, split=True):
+-        # type: (Any, Any) -> None
+-        if self.root_context:
+-            if self.requested_indent is not None:
+-                self.write_line_break()
+-                if self.requested_indent != 0:
+-                    self.write_indent()
+-        self.write_indicator(u"'", True)
+-        spaces = False
+-        breaks = False
+-        start = end = 0
+-        while end <= len(text):
+-            ch = None
+-            if end < len(text):
+-                ch = text[end]
+-            if spaces:
+-                if ch is None or ch != u' ':
+-                    if (
+-                        start + 1 == end
+-                        and self.column > self.best_width
+-                        and split
+-                        and start != 0
+-                        and end != len(text)
+-                    ):
+-                        self.write_indent()
+-                    else:
+-                        data = text[start:end]
+-                        self.column += len(data)
+-                        if bool(self.encoding):
+-                            data = data.encode(self.encoding)
+-                        self.stream.write(data)
+-                    start = end
+-            elif breaks:
+-                if ch is None or ch not in u'\n\x85\u2028\u2029':
+-                    if text[start] == u'\n':
+-                        self.write_line_break()
+-                    for br in text[start:end]:
+-                        if br == u'\n':
+-                            self.write_line_break()
+-                        else:
+-                            self.write_line_break(br)
+-                    self.write_indent()
+-                    start = end
+-            else:
+-                if ch is None or ch in u' \n\x85\u2028\u2029' or ch == u"'":
+-                    if start < end:
+-                        data = text[start:end]
+-                        self.column += len(data)
+-                        if bool(self.encoding):
+-                            data = data.encode(self.encoding)
+-                        self.stream.write(data)
+-                        start = end
+-            if ch == u"'":
+-                data = u"''"
+-                self.column += 2
+-                if bool(self.encoding):
+-                    data = data.encode(self.encoding)
+-                self.stream.write(data)
+-                start = end + 1
+-            if ch is not None:
+-                spaces = ch == u' '
+-                breaks = ch in u'\n\x85\u2028\u2029'
+-            end += 1
+-        self.write_indicator(u"'", False)
+-
+-    ESCAPE_REPLACEMENTS = {
+-        u'\0': u'0',
+-        u'\x07': u'a',
+-        u'\x08': u'b',
+-        u'\x09': u't',
+-        u'\x0A': u'n',
+-        u'\x0B': u'v',
+-        u'\x0C': u'f',
+-        u'\x0D': u'r',
+-        u'\x1B': u'e',
+-        u'"': u'"',
+-        u'\\': u'\\',
+-        u'\x85': u'N',
+-        u'\xA0': u'_',
+-        u'\u2028': u'L',
+-        u'\u2029': u'P',
+-    }
+-
+-    def write_double_quoted(self, text, split=True):
+-        # type: (Any, Any) -> None
+-        if self.root_context:
+-            if self.requested_indent is not None:
+-                self.write_line_break()
+-                if self.requested_indent != 0:
+-                    self.write_indent()
+-        self.write_indicator(u'"', True)
+-        start = end = 0
+-        while end <= len(text):
+-            ch = None
+-            if end < len(text):
+-                ch = text[end]
+-            if (
+-                ch is None
+-                or ch in u'"\\\x85\u2028\u2029\uFEFF'
+-                or not (
+-                    u'\x20' <= ch <= u'\x7E'
+-                    or (
+-                        self.allow_unicode
+-                        and (u'\xA0' <= ch <= u'\uD7FF' or u'\uE000' <= ch <= u'\uFFFD')
+-                    )
+-                )
+-            ):
+-                if start < end:
+-                    data = text[start:end]
+-                    self.column += len(data)
+-                    if bool(self.encoding):
+-                        data = data.encode(self.encoding)
+-                    self.stream.write(data)
+-                    start = end
+-                if ch is not None:
+-                    if ch in self.ESCAPE_REPLACEMENTS:
+-                        data = u'\\' + self.ESCAPE_REPLACEMENTS[ch]
+-                    elif ch <= u'\xFF':
+-                        data = u'\\x%02X' % ord(ch)
+-                    elif ch <= u'\uFFFF':
+-                        data = u'\\u%04X' % ord(ch)
+-                    else:
+-                        data = u'\\U%08X' % ord(ch)
+-                    self.column += len(data)
+-                    if bool(self.encoding):
+-                        data = data.encode(self.encoding)
+-                    self.stream.write(data)
+-                    start = end + 1
+-            if (
+-                0 < end < len(text) - 1
+-                and (ch == u' ' or start >= end)
+-                and self.column + (end - start) > self.best_width
+-                and split
+-            ):
+-                data = text[start:end] + u'\\'
+-                if start < end:
+-                    start = end
+-                self.column += len(data)
+-                if bool(self.encoding):
+-                    data = data.encode(self.encoding)
+-                self.stream.write(data)
+-                self.write_indent()
+-                self.whitespace = False
+-                self.indention = False
+-                if text[start] == u' ':
+-                    data = u'\\'
+-                    self.column += len(data)
+-                    if bool(self.encoding):
+-                        data = data.encode(self.encoding)
+-                    self.stream.write(data)
+-            end += 1
+-        self.write_indicator(u'"', False)
+-
+-    def determine_block_hints(self, text):
+-        # type: (Any) -> Any
+-        indent = 0
+-        indicator = u''
+-        hints = u''
+-        if text:
+-            if text[0] in u' \n\x85\u2028\u2029':
+-                indent = self.best_sequence_indent
+-                hints += text_type(indent)
+-            elif self.root_context:
+-                for end in ['\n---', '\n...']:
+-                    pos = 0
+-                    while True:
+-                        pos = text.find(end, pos)
+-                        if pos == -1:
+-                            break
+-                        try:
+-                            if text[pos + 4] in ' \r\n':
+-                                break
+-                        except IndexError:
+-                            pass
+-                        pos += 1
+-                    if pos > -1:
+-                        break
+-                if pos > 0:
+-                    indent = self.best_sequence_indent
+-            if text[-1] not in u'\n\x85\u2028\u2029':
+-                indicator = u'-'
+-            elif len(text) == 1 or text[-2] in u'\n\x85\u2028\u2029':
+-                indicator = u'+'
+-        hints += indicator
+-        return hints, indent, indicator
+-
+-    def write_folded(self, text):
+-        # type: (Any) -> None
+-        hints, _indent, _indicator = self.determine_block_hints(text)
+-        self.write_indicator(u'>' + hints, True)
+-        if _indicator == u'+':
+-            self.open_ended = True
+-        self.write_line_break()
+-        leading_space = True
+-        spaces = False
+-        breaks = True
+-        start = end = 0
+-        while end <= len(text):
+-            ch = None
+-            if end < len(text):
+-                ch = text[end]
+-            if breaks:
+-                if ch is None or ch not in u'\n\x85\u2028\u2029\a':
+-                    if (
+-                        not leading_space
+-                        and ch is not None
+-                        and ch != u' '
+-                        and text[start] == u'\n'
+-                    ):
+-                        self.write_line_break()
+-                    leading_space = ch == u' '
+-                    for br in text[start:end]:
+-                        if br == u'\n':
+-                            self.write_line_break()
+-                        else:
+-                            self.write_line_break(br)
+-                    if ch is not None:
+-                        self.write_indent()
+-                    start = end
+-            elif spaces:
+-                if ch != u' ':
+-                    if start + 1 == end and self.column > self.best_width:
+-                        self.write_indent()
+-                    else:
+-                        data = text[start:end]
+-                        self.column += len(data)
+-                        if bool(self.encoding):
+-                            data = data.encode(self.encoding)
+-                        self.stream.write(data)
+-                    start = end
+-            else:
+-                if ch is None or ch in u' \n\x85\u2028\u2029\a':
+-                    data = text[start:end]
+-                    self.column += len(data)
+-                    if bool(self.encoding):
+-                        data = data.encode(self.encoding)
+-                    self.stream.write(data)
+-                    if ch == u'\a':
+-                        if end < (len(text) - 1) and not text[end + 2].isspace():
+-                            self.write_line_break()
+-                            self.write_indent()
+-                            end += 2  # \a and the space that is inserted on the fold
+-                        else:
+-                            raise EmitterError('unexcpected fold indicator \\a before space')
+-                    if ch is None:
+-                        self.write_line_break()
+-                    start = end
+-            if ch is not None:
+-                breaks = ch in u'\n\x85\u2028\u2029'
+-                spaces = ch == u' '
+-            end += 1
+-
+-    def write_literal(self, text, comment=None):
+-        # type: (Any, Any) -> None
+-        hints, _indent, _indicator = self.determine_block_hints(text)
+-        self.write_indicator(u'|' + hints, True)
+-        try:
+-            comment = comment[1][0]
+-            if comment:
+-                self.stream.write(comment)
+-        except (TypeError, IndexError):
+-            pass
+-        if _indicator == u'+':
+-            self.open_ended = True
+-        self.write_line_break()
+-        breaks = True
+-        start = end = 0
+-        while end <= len(text):
+-            ch = None
+-            if end < len(text):
+-                ch = text[end]
+-            if breaks:
+-                if ch is None or ch not in u'\n\x85\u2028\u2029':
+-                    for br in text[start:end]:
+-                        if br == u'\n':
+-                            self.write_line_break()
+-                        else:
+-                            self.write_line_break(br)
+-                    if ch is not None:
+-                        if self.root_context:
+-                            idnx = self.indent if self.indent is not None else 0
+-                            self.stream.write(u' ' * (_indent + idnx))
+-                        else:
+-                            self.write_indent()
+-                    start = end
+-            else:
+-                if ch is None or ch in u'\n\x85\u2028\u2029':
+-                    data = text[start:end]
+-                    if bool(self.encoding):
+-                        data = data.encode(self.encoding)
+-                    self.stream.write(data)
+-                    if ch is None:
+-                        self.write_line_break()
+-                    start = end
+-            if ch is not None:
+-                breaks = ch in u'\n\x85\u2028\u2029'
+-            end += 1
+-
+-    def write_plain(self, text, split=True):
+-        # type: (Any, Any) -> None
+-        if self.root_context:
+-            if self.requested_indent is not None:
+-                self.write_line_break()
+-                if self.requested_indent != 0:
+-                    self.write_indent()
+-            else:
+-                self.open_ended = True
+-        if not text:
+-            return
+-        if not self.whitespace:
+-            data = u' '
+-            self.column += len(data)
+-            if self.encoding:
+-                data = data.encode(self.encoding)
+-            self.stream.write(data)
+-        self.whitespace = False
+-        self.indention = False
+-        spaces = False
+-        breaks = False
+-        start = end = 0
+-        while end <= len(text):
+-            ch = None
+-            if end < len(text):
+-                ch = text[end]
+-            if spaces:
+-                if ch != u' ':
+-                    if start + 1 == end and self.column > self.best_width and split:
+-                        self.write_indent()
+-                        self.whitespace = False
+-                        self.indention = False
+-                    else:
+-                        data = text[start:end]
+-                        self.column += len(data)
+-                        if self.encoding:
+-                            data = data.encode(self.encoding)
+-                        self.stream.write(data)
+-                    start = end
+-            elif breaks:
+-                if ch not in u'\n\x85\u2028\u2029':  # type: ignore
+-                    if text[start] == u'\n':
+-                        self.write_line_break()
+-                    for br in text[start:end]:
+-                        if br == u'\n':
+-                            self.write_line_break()
+-                        else:
+-                            self.write_line_break(br)
+-                    self.write_indent()
+-                    self.whitespace = False
+-                    self.indention = False
+-                    start = end
+-            else:
+-                if ch is None or ch in u' \n\x85\u2028\u2029':
+-                    data = text[start:end]
+-                    self.column += len(data)
+-                    if self.encoding:
+-                        data = data.encode(self.encoding)
+-                    try:
+-                        self.stream.write(data)
+-                    except:  # NOQA
+-                        sys.stdout.write(repr(data) + '\n')
+-                        raise
+-                    start = end
+-            if ch is not None:
+-                spaces = ch == u' '
+-                breaks = ch in u'\n\x85\u2028\u2029'
+-            end += 1
+-
+-    def write_comment(self, comment, pre=False):
+-        # type: (Any, bool) -> None
+-        value = comment.value
+-        # nprintf('{:02d} {:02d} {!r}'.format(self.column, comment.start_mark.column, value))
+-        if not pre and value[-1] == '\n':
+-            value = value[:-1]
+-        try:
+-            # get original column position
+-            col = comment.start_mark.column
+-            if comment.value and comment.value.startswith('\n'):
+-                # never inject extra spaces if the comment starts with a newline
+-                # and not a real comment (e.g. if you have an empty line following a key-value
+-                col = self.column
+-            elif col < self.column + 1:
+-                ValueError
+-        except ValueError:
+-            col = self.column + 1
+-        # nprint('post_comment', self.line, self.column, value)
+-        try:
+-            # at least one space if the current column >= the start column of the comment
+-            # but not at the start of a line
+-            nr_spaces = col - self.column
+-            if self.column and value.strip() and nr_spaces < 1 and value[0] != '\n':
+-                nr_spaces = 1
+-            value = ' ' * nr_spaces + value
+-            try:
+-                if bool(self.encoding):
+-                    value = value.encode(self.encoding)
+-            except UnicodeDecodeError:
+-                pass
+-            self.stream.write(value)
+-        except TypeError:
+-            raise
+-        if not pre:
+-            self.write_line_break()
+-
+-    def write_pre_comment(self, event):
+-        # type: (Any) -> bool
+-        comments = event.comment[1]
+-        if comments is None:
+-            return False
+-        try:
+-            start_events = (MappingStartEvent, SequenceStartEvent)
+-            for comment in comments:
+-                if isinstance(event, start_events) and getattr(comment, 'pre_done', None):
+-                    continue
+-                if self.column != 0:
+-                    self.write_line_break()
+-                self.write_comment(comment, pre=True)
+-                if isinstance(event, start_events):
+-                    comment.pre_done = True
+-        except TypeError:
+-            sys.stdout.write('eventtt {} {}'.format(type(event), event))
+-            raise
+-        return True
+-
+-    def write_post_comment(self, event):
+-        # type: (Any) -> bool
+-        if self.event.comment[0] is None:
+-            return False
+-        comment = event.comment[0]
+-        self.write_comment(comment)
+-        return True
+diff --git a/dynaconf/vendor_src/ruamel/yaml/error.py b/dynaconf/vendor_src/ruamel/yaml/error.py
+deleted file mode 100644
+index b034d02..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/error.py
++++ /dev/null
+@@ -1,311 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-import warnings
+-import textwrap
+-
+-from .compat import utf8
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Text  # NOQA
+-
+-
+-__all__ = [
+-    'FileMark',
+-    'StringMark',
+-    'CommentMark',
+-    'YAMLError',
+-    'MarkedYAMLError',
+-    'ReusedAnchorWarning',
+-    'UnsafeLoaderWarning',
+-    'MarkedYAMLWarning',
+-    'MarkedYAMLFutureWarning',
+-]
+-
+-
+-class StreamMark(object):
+-    __slots__ = 'name', 'index', 'line', 'column'
+-
+-    def __init__(self, name, index, line, column):
+-        # type: (Any, int, int, int) -> None
+-        self.name = name
+-        self.index = index
+-        self.line = line
+-        self.column = column
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        where = '  in "%s", line %d, column %d' % (self.name, self.line + 1, self.column + 1)
+-        return where
+-
+-    def __eq__(self, other):
+-        # type: (Any) -> bool
+-        if self.line != other.line or self.column != other.column:
+-            return False
+-        if self.name != other.name or self.index != other.index:
+-            return False
+-        return True
+-
+-    def __ne__(self, other):
+-        # type: (Any) -> bool
+-        return not self.__eq__(other)
+-
+-
+-class FileMark(StreamMark):
+-    __slots__ = ()
+-
+-
+-class StringMark(StreamMark):
+-    __slots__ = 'name', 'index', 'line', 'column', 'buffer', 'pointer'
+-
+-    def __init__(self, name, index, line, column, buffer, pointer):
+-        # type: (Any, int, int, int, Any, Any) -> None
+-        StreamMark.__init__(self, name, index, line, column)
+-        self.buffer = buffer
+-        self.pointer = pointer
+-
+-    def get_snippet(self, indent=4, max_length=75):
+-        # type: (int, int) -> Any
+-        if self.buffer is None:  # always False
+-            return None
+-        head = ""
+-        start = self.pointer
+-        while start > 0 and self.buffer[start - 1] not in u'\0\r\n\x85\u2028\u2029':
+-            start -= 1
+-            if self.pointer - start > max_length / 2 - 1:
+-                head = ' ... '
+-                start += 5
+-                break
+-        tail = ""
+-        end = self.pointer
+-        while end < len(self.buffer) and self.buffer[end] not in u'\0\r\n\x85\u2028\u2029':
+-            end += 1
+-            if end - self.pointer > max_length / 2 - 1:
+-                tail = ' ... '
+-                end -= 5
+-                break
+-        snippet = utf8(self.buffer[start:end])
+-        caret = '^'
+-        caret = '^ (line: {})'.format(self.line + 1)
+-        return (
+-            ' ' * indent
+-            + head
+-            + snippet
+-            + tail
+-            + '\n'
+-            + ' ' * (indent + self.pointer - start + len(head))
+-            + caret
+-        )
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        snippet = self.get_snippet()
+-        where = '  in "%s", line %d, column %d' % (self.name, self.line + 1, self.column + 1)
+-        if snippet is not None:
+-            where += ':\n' + snippet
+-        return where
+-
+-
+-class CommentMark(object):
+-    __slots__ = ('column',)
+-
+-    def __init__(self, column):
+-        # type: (Any) -> None
+-        self.column = column
+-
+-
+-class YAMLError(Exception):
+-    pass
+-
+-
+-class MarkedYAMLError(YAMLError):
+-    def __init__(
+-        self,
+-        context=None,
+-        context_mark=None,
+-        problem=None,
+-        problem_mark=None,
+-        note=None,
+-        warn=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any) -> None
+-        self.context = context
+-        self.context_mark = context_mark
+-        self.problem = problem
+-        self.problem_mark = problem_mark
+-        self.note = note
+-        # warn is ignored
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        lines = []  # type: List[str]
+-        if self.context is not None:
+-            lines.append(self.context)
+-        if self.context_mark is not None and (
+-            self.problem is None
+-            or self.problem_mark is None
+-            or self.context_mark.name != self.problem_mark.name
+-            or self.context_mark.line != self.problem_mark.line
+-            or self.context_mark.column != self.problem_mark.column
+-        ):
+-            lines.append(str(self.context_mark))
+-        if self.problem is not None:
+-            lines.append(self.problem)
+-        if self.problem_mark is not None:
+-            lines.append(str(self.problem_mark))
+-        if self.note is not None and self.note:
+-            note = textwrap.dedent(self.note)
+-            lines.append(note)
+-        return '\n'.join(lines)
+-
+-
+-class YAMLStreamError(Exception):
+-    pass
+-
+-
+-class YAMLWarning(Warning):
+-    pass
+-
+-
+-class MarkedYAMLWarning(YAMLWarning):
+-    def __init__(
+-        self,
+-        context=None,
+-        context_mark=None,
+-        problem=None,
+-        problem_mark=None,
+-        note=None,
+-        warn=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any) -> None
+-        self.context = context
+-        self.context_mark = context_mark
+-        self.problem = problem
+-        self.problem_mark = problem_mark
+-        self.note = note
+-        self.warn = warn
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        lines = []  # type: List[str]
+-        if self.context is not None:
+-            lines.append(self.context)
+-        if self.context_mark is not None and (
+-            self.problem is None
+-            or self.problem_mark is None
+-            or self.context_mark.name != self.problem_mark.name
+-            or self.context_mark.line != self.problem_mark.line
+-            or self.context_mark.column != self.problem_mark.column
+-        ):
+-            lines.append(str(self.context_mark))
+-        if self.problem is not None:
+-            lines.append(self.problem)
+-        if self.problem_mark is not None:
+-            lines.append(str(self.problem_mark))
+-        if self.note is not None and self.note:
+-            note = textwrap.dedent(self.note)
+-            lines.append(note)
+-        if self.warn is not None and self.warn:
+-            warn = textwrap.dedent(self.warn)
+-            lines.append(warn)
+-        return '\n'.join(lines)
+-
+-
+-class ReusedAnchorWarning(YAMLWarning):
+-    pass
+-
+-
+-class UnsafeLoaderWarning(YAMLWarning):
+-    text = """
+-The default 'Loader' for 'load(stream)' without further arguments can be unsafe.
+-Use 'load(stream, Loader=ruamel.yaml.Loader)' explicitly if that is OK.
+-Alternatively include the following in your code:
+-
+-  import warnings
+-  warnings.simplefilter('ignore', ruamel.yaml.error.UnsafeLoaderWarning)
+-
+-In most other cases you should consider using 'safe_load(stream)'"""
+-    pass
+-
+-
+-warnings.simplefilter('once', UnsafeLoaderWarning)
+-
+-
+-class MantissaNoDotYAML1_1Warning(YAMLWarning):
+-    def __init__(self, node, flt_str):
+-        # type: (Any, Any) -> None
+-        self.node = node
+-        self.flt = flt_str
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        line = self.node.start_mark.line
+-        col = self.node.start_mark.column
+-        return """
+-In YAML 1.1 floating point values should have a dot ('.') in their mantissa.
+-See the Floating-Point Language-Independent Type for YAML™ Version 1.1 specification
+-( http://yaml.org/type/float.html ). This dot is not required for JSON nor for YAML 1.2
+-
+-Correct your float: "{}" on line: {}, column: {}
+-
+-or alternatively include the following in your code:
+-
+-  import warnings
+-  warnings.simplefilter('ignore', ruamel.yaml.error.MantissaNoDotYAML1_1Warning)
+-
+-""".format(
+-            self.flt, line, col
+-        )
+-
+-
+-warnings.simplefilter('once', MantissaNoDotYAML1_1Warning)
+-
+-
+-class YAMLFutureWarning(Warning):
+-    pass
+-
+-
+-class MarkedYAMLFutureWarning(YAMLFutureWarning):
+-    def __init__(
+-        self,
+-        context=None,
+-        context_mark=None,
+-        problem=None,
+-        problem_mark=None,
+-        note=None,
+-        warn=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any) -> None
+-        self.context = context
+-        self.context_mark = context_mark
+-        self.problem = problem
+-        self.problem_mark = problem_mark
+-        self.note = note
+-        self.warn = warn
+-
+-    def __str__(self):
+-        # type: () -> Any
+-        lines = []  # type: List[str]
+-        if self.context is not None:
+-            lines.append(self.context)
+-
+-        if self.context_mark is not None and (
+-            self.problem is None
+-            or self.problem_mark is None
+-            or self.context_mark.name != self.problem_mark.name
+-            or self.context_mark.line != self.problem_mark.line
+-            or self.context_mark.column != self.problem_mark.column
+-        ):
+-            lines.append(str(self.context_mark))
+-        if self.problem is not None:
+-            lines.append(self.problem)
+-        if self.problem_mark is not None:
+-            lines.append(str(self.problem_mark))
+-        if self.note is not None and self.note:
+-            note = textwrap.dedent(self.note)
+-            lines.append(note)
+-        if self.warn is not None and self.warn:
+-            warn = textwrap.dedent(self.warn)
+-            lines.append(warn)
+-        return '\n'.join(lines)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/events.py b/dynaconf/vendor_src/ruamel/yaml/events.py
+deleted file mode 100644
+index 58b2121..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/events.py
++++ /dev/null
+@@ -1,157 +0,0 @@
+-# coding: utf-8
+-
+-# Abstract classes.
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List  # NOQA
+-
+-
+-def CommentCheck():
+-    # type: () -> None
+-    pass
+-
+-
+-class Event(object):
+-    __slots__ = 'start_mark', 'end_mark', 'comment'
+-
+-    def __init__(self, start_mark=None, end_mark=None, comment=CommentCheck):
+-        # type: (Any, Any, Any) -> None
+-        self.start_mark = start_mark
+-        self.end_mark = end_mark
+-        # assert comment is not CommentCheck
+-        if comment is CommentCheck:
+-            comment = None
+-        self.comment = comment
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        attributes = [
+-            key
+-            for key in ['anchor', 'tag', 'implicit', 'value', 'flow_style', 'style']
+-            if hasattr(self, key)
+-        ]
+-        arguments = ', '.join(['%s=%r' % (key, getattr(self, key)) for key in attributes])
+-        if self.comment not in [None, CommentCheck]:
+-            arguments += ', comment={!r}'.format(self.comment)
+-        return '%s(%s)' % (self.__class__.__name__, arguments)
+-
+-
+-class NodeEvent(Event):
+-    __slots__ = ('anchor',)
+-
+-    def __init__(self, anchor, start_mark=None, end_mark=None, comment=None):
+-        # type: (Any, Any, Any, Any) -> None
+-        Event.__init__(self, start_mark, end_mark, comment)
+-        self.anchor = anchor
+-
+-
+-class CollectionStartEvent(NodeEvent):
+-    __slots__ = 'tag', 'implicit', 'flow_style', 'nr_items'
+-
+-    def __init__(
+-        self,
+-        anchor,
+-        tag,
+-        implicit,
+-        start_mark=None,
+-        end_mark=None,
+-        flow_style=None,
+-        comment=None,
+-        nr_items=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any, Any, Optional[int]) -> None
+-        NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
+-        self.tag = tag
+-        self.implicit = implicit
+-        self.flow_style = flow_style
+-        self.nr_items = nr_items
+-
+-
+-class CollectionEndEvent(Event):
+-    __slots__ = ()
+-
+-
+-# Implementations.
+-
+-
+-class StreamStartEvent(Event):
+-    __slots__ = ('encoding',)
+-
+-    def __init__(self, start_mark=None, end_mark=None, encoding=None, comment=None):
+-        # type: (Any, Any, Any, Any) -> None
+-        Event.__init__(self, start_mark, end_mark, comment)
+-        self.encoding = encoding
+-
+-
+-class StreamEndEvent(Event):
+-    __slots__ = ()
+-
+-
+-class DocumentStartEvent(Event):
+-    __slots__ = 'explicit', 'version', 'tags'
+-
+-    def __init__(
+-        self,
+-        start_mark=None,
+-        end_mark=None,
+-        explicit=None,
+-        version=None,
+-        tags=None,
+-        comment=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any) -> None
+-        Event.__init__(self, start_mark, end_mark, comment)
+-        self.explicit = explicit
+-        self.version = version
+-        self.tags = tags
+-
+-
+-class DocumentEndEvent(Event):
+-    __slots__ = ('explicit',)
+-
+-    def __init__(self, start_mark=None, end_mark=None, explicit=None, comment=None):
+-        # type: (Any, Any, Any, Any) -> None
+-        Event.__init__(self, start_mark, end_mark, comment)
+-        self.explicit = explicit
+-
+-
+-class AliasEvent(NodeEvent):
+-    __slots__ = ()
+-
+-
+-class ScalarEvent(NodeEvent):
+-    __slots__ = 'tag', 'implicit', 'value', 'style'
+-
+-    def __init__(
+-        self,
+-        anchor,
+-        tag,
+-        implicit,
+-        value,
+-        start_mark=None,
+-        end_mark=None,
+-        style=None,
+-        comment=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any, Any, Any) -> None
+-        NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
+-        self.tag = tag
+-        self.implicit = implicit
+-        self.value = value
+-        self.style = style
+-
+-
+-class SequenceStartEvent(CollectionStartEvent):
+-    __slots__ = ()
+-
+-
+-class SequenceEndEvent(CollectionEndEvent):
+-    __slots__ = ()
+-
+-
+-class MappingStartEvent(CollectionStartEvent):
+-    __slots__ = ()
+-
+-
+-class MappingEndEvent(CollectionEndEvent):
+-    __slots__ = ()
+diff --git a/dynaconf/vendor_src/ruamel/yaml/loader.py b/dynaconf/vendor_src/ruamel/yaml/loader.py
+deleted file mode 100644
+index 53dd576..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/loader.py
++++ /dev/null
+@@ -1,74 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-
+-from .reader import Reader
+-from .scanner import Scanner, RoundTripScanner
+-from .parser import Parser, RoundTripParser
+-from .composer import Composer
+-from .constructor import (
+-    BaseConstructor,
+-    SafeConstructor,
+-    Constructor,
+-    RoundTripConstructor,
+-)
+-from .resolver import VersionedResolver
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, List, Union, Optional  # NOQA
+-    from .compat import StreamTextType, VersionType  # NOQA
+-
+-__all__ = ['BaseLoader', 'SafeLoader', 'Loader', 'RoundTripLoader']
+-
+-
+-class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, VersionedResolver):
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        Reader.__init__(self, stream, loader=self)
+-        Scanner.__init__(self, loader=self)
+-        Parser.__init__(self, loader=self)
+-        Composer.__init__(self, loader=self)
+-        BaseConstructor.__init__(self, loader=self)
+-        VersionedResolver.__init__(self, version, loader=self)
+-
+-
+-class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, VersionedResolver):
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        Reader.__init__(self, stream, loader=self)
+-        Scanner.__init__(self, loader=self)
+-        Parser.__init__(self, loader=self)
+-        Composer.__init__(self, loader=self)
+-        SafeConstructor.__init__(self, loader=self)
+-        VersionedResolver.__init__(self, version, loader=self)
+-
+-
+-class Loader(Reader, Scanner, Parser, Composer, Constructor, VersionedResolver):
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        Reader.__init__(self, stream, loader=self)
+-        Scanner.__init__(self, loader=self)
+-        Parser.__init__(self, loader=self)
+-        Composer.__init__(self, loader=self)
+-        Constructor.__init__(self, loader=self)
+-        VersionedResolver.__init__(self, version, loader=self)
+-
+-
+-class RoundTripLoader(
+-    Reader,
+-    RoundTripScanner,
+-    RoundTripParser,
+-    Composer,
+-    RoundTripConstructor,
+-    VersionedResolver,
+-):
+-    def __init__(self, stream, version=None, preserve_quotes=None):
+-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+-        # self.reader = Reader.__init__(self, stream)
+-        Reader.__init__(self, stream, loader=self)
+-        RoundTripScanner.__init__(self, loader=self)
+-        RoundTripParser.__init__(self, loader=self)
+-        Composer.__init__(self, loader=self)
+-        RoundTripConstructor.__init__(self, preserve_quotes=preserve_quotes, loader=self)
+-        VersionedResolver.__init__(self, version, loader=self)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/main.py b/dynaconf/vendor_src/ruamel/yaml/main.py
+deleted file mode 100644
+index 7023331..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/main.py
++++ /dev/null
+@@ -1,1534 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import, unicode_literals, print_function
+-
+-import sys
+-import os
+-import warnings
+-import glob
+-from importlib import import_module
+-
+-
+-import dynaconf.vendor.ruamel as ruamel
+-from .error import UnsafeLoaderWarning, YAMLError  # NOQA
+-
+-from .tokens import *  # NOQA
+-from .events import *  # NOQA
+-from .nodes import *  # NOQA
+-
+-from .loader import BaseLoader, SafeLoader, Loader, RoundTripLoader  # NOQA
+-from .dumper import BaseDumper, SafeDumper, Dumper, RoundTripDumper  # NOQA
+-from .compat import StringIO, BytesIO, with_metaclass, PY3, nprint
+-from .resolver import VersionedResolver, Resolver  # NOQA
+-from .representer import (
+-    BaseRepresenter,
+-    SafeRepresenter,
+-    Representer,
+-    RoundTripRepresenter,
+-)
+-from .constructor import (
+-    BaseConstructor,
+-    SafeConstructor,
+-    Constructor,
+-    RoundTripConstructor,
+-)
+-from .loader import Loader as UnsafeLoader
+-
+-if False:  # MYPY
+-    from typing import List, Set, Dict, Union, Any, Callable, Optional, Text  # NOQA
+-    from .compat import StreamType, StreamTextType, VersionType  # NOQA
+-
+-    if PY3:
+-        from pathlib import Path
+-    else:
+-        Path = Any
+-
+-try:
+-    from _ruamel_yaml import CParser, CEmitter  # type: ignore
+-except:  # NOQA
+-    CParser = CEmitter = None
+-
+-# import io
+-
+-enforce = object()
+-
+-
+-# YAML is an acronym, i.e. spoken: rhymes with "camel". And thus a
+-# subset of abbreviations, which should be all caps according to PEP8
+-
+-
+-class YAML(object):
+-    def __init__(
+-        self, _kw=enforce, typ=None, pure=False, output=None, plug_ins=None  # input=None,
+-    ):
+-        # type: (Any, Optional[Text], Any, Any, Any) -> None
+-        """
+-        _kw: not used, forces keyword arguments in 2.7 (in 3 you can do (*, safe_load=..)
+-        typ: 'rt'/None -> RoundTripLoader/RoundTripDumper,  (default)
+-             'safe'    -> SafeLoader/SafeDumper,
+-             'unsafe'  -> normal/unsafe Loader/Dumper
+-             'base'    -> baseloader
+-        pure: if True only use Python modules
+-        input/output: needed to work as context manager
+-        plug_ins: a list of plug-in files
+-        """
+-        if _kw is not enforce:
+-            raise TypeError(
+-                '{}.__init__() takes no positional argument but at least '
+-                'one was given ({!r})'.format(self.__class__.__name__, _kw)
+-            )
+-
+-        self.typ = ['rt'] if typ is None else (typ if isinstance(typ, list) else [typ])
+-        self.pure = pure
+-
+-        # self._input = input
+-        self._output = output
+-        self._context_manager = None  # type: Any
+-
+-        self.plug_ins = []  # type: List[Any]
+-        for pu in ([] if plug_ins is None else plug_ins) + self.official_plug_ins():
+-            file_name = pu.replace(os.sep, '.')
+-            self.plug_ins.append(import_module(file_name))
+-        self.Resolver = ruamel.yaml.resolver.VersionedResolver  # type: Any
+-        self.allow_unicode = True
+-        self.Reader = None  # type: Any
+-        self.Representer = None  # type: Any
+-        self.Constructor = None  # type: Any
+-        self.Scanner = None  # type: Any
+-        self.Serializer = None  # type: Any
+-        self.default_flow_style = None  # type: Any
+-        typ_found = 1
+-        setup_rt = False
+-        if 'rt' in self.typ:
+-            setup_rt = True
+-        elif 'safe' in self.typ:
+-            self.Emitter = (
+-                ruamel.yaml.emitter.Emitter if pure or CEmitter is None else CEmitter
+-            )
+-            self.Representer = ruamel.yaml.representer.SafeRepresenter
+-            self.Parser = ruamel.yaml.parser.Parser if pure or CParser is None else CParser
+-            self.Composer = ruamel.yaml.composer.Composer
+-            self.Constructor = ruamel.yaml.constructor.SafeConstructor
+-        elif 'base' in self.typ:
+-            self.Emitter = ruamel.yaml.emitter.Emitter
+-            self.Representer = ruamel.yaml.representer.BaseRepresenter
+-            self.Parser = ruamel.yaml.parser.Parser if pure or CParser is None else CParser
+-            self.Composer = ruamel.yaml.composer.Composer
+-            self.Constructor = ruamel.yaml.constructor.BaseConstructor
+-        elif 'unsafe' in self.typ:
+-            self.Emitter = (
+-                ruamel.yaml.emitter.Emitter if pure or CEmitter is None else CEmitter
+-            )
+-            self.Representer = ruamel.yaml.representer.Representer
+-            self.Parser = ruamel.yaml.parser.Parser if pure or CParser is None else CParser
+-            self.Composer = ruamel.yaml.composer.Composer
+-            self.Constructor = ruamel.yaml.constructor.Constructor
+-        else:
+-            setup_rt = True
+-            typ_found = 0
+-        if setup_rt:
+-            self.default_flow_style = False
+-            # no optimized rt-dumper yet
+-            self.Emitter = ruamel.yaml.emitter.Emitter
+-            self.Serializer = ruamel.yaml.serializer.Serializer
+-            self.Representer = ruamel.yaml.representer.RoundTripRepresenter
+-            self.Scanner = ruamel.yaml.scanner.RoundTripScanner
+-            # no optimized rt-parser yet
+-            self.Parser = ruamel.yaml.parser.RoundTripParser
+-            self.Composer = ruamel.yaml.composer.Composer
+-            self.Constructor = ruamel.yaml.constructor.RoundTripConstructor
+-        del setup_rt
+-        self.stream = None
+-        self.canonical = None
+-        self.old_indent = None
+-        self.width = None
+-        self.line_break = None
+-
+-        self.map_indent = None
+-        self.sequence_indent = None
+-        self.sequence_dash_offset = 0
+-        self.compact_seq_seq = None
+-        self.compact_seq_map = None
+-        self.sort_base_mapping_type_on_output = None  # default: sort
+-
+-        self.top_level_colon_align = None
+-        self.prefix_colon = None
+-        self.version = None
+-        self.preserve_quotes = None
+-        self.allow_duplicate_keys = False  # duplicate keys in map, set
+-        self.encoding = 'utf-8'
+-        self.explicit_start = None
+-        self.explicit_end = None
+-        self.tags = None
+-        self.default_style = None
+-        self.top_level_block_style_scalar_no_indent_error_1_1 = False
+-        # directives end indicator with single scalar document
+-        self.scalar_after_indicator = None
+-        # [a, b: 1, c: {d: 2}]  vs. [a, {b: 1}, {c: {d: 2}}]
+-        self.brace_single_entry_mapping_in_flow_sequence = False
+-        for module in self.plug_ins:
+-            if getattr(module, 'typ', None) in self.typ:
+-                typ_found += 1
+-                module.init_typ(self)
+-                break
+-        if typ_found == 0:
+-            raise NotImplementedError(
+-                'typ "{}"not recognised (need to install plug-in?)'.format(self.typ)
+-            )
+-
+-    @property
+-    def reader(self):
+-        # type: () -> Any
+-        try:
+-            return self._reader  # type: ignore
+-        except AttributeError:
+-            self._reader = self.Reader(None, loader=self)
+-            return self._reader
+-
+-    @property
+-    def scanner(self):
+-        # type: () -> Any
+-        try:
+-            return self._scanner  # type: ignore
+-        except AttributeError:
+-            self._scanner = self.Scanner(loader=self)
+-            return self._scanner
+-
+-    @property
+-    def parser(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            if self.Parser is not CParser:
+-                setattr(self, attr, self.Parser(loader=self))
+-            else:
+-                if getattr(self, '_stream', None) is None:
+-                    # wait for the stream
+-                    return None
+-                else:
+-                    # if not hasattr(self._stream, 'read') and hasattr(self._stream, 'open'):
+-                    #     # pathlib.Path() instance
+-                    #     setattr(self, attr, CParser(self._stream))
+-                    # else:
+-                    setattr(self, attr, CParser(self._stream))
+-                    # self._parser = self._composer = self
+-                    # nprint('scanner', self.loader.scanner)
+-
+-        return getattr(self, attr)
+-
+-    @property
+-    def composer(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            setattr(self, attr, self.Composer(loader=self))
+-        return getattr(self, attr)
+-
+-    @property
+-    def constructor(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            cnst = self.Constructor(preserve_quotes=self.preserve_quotes, loader=self)
+-            cnst.allow_duplicate_keys = self.allow_duplicate_keys
+-            setattr(self, attr, cnst)
+-        return getattr(self, attr)
+-
+-    @property
+-    def resolver(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            setattr(self, attr, self.Resolver(version=self.version, loader=self))
+-        return getattr(self, attr)
+-
+-    @property
+-    def emitter(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            if self.Emitter is not CEmitter:
+-                _emitter = self.Emitter(
+-                    None,
+-                    canonical=self.canonical,
+-                    indent=self.old_indent,
+-                    width=self.width,
+-                    allow_unicode=self.allow_unicode,
+-                    line_break=self.line_break,
+-                    prefix_colon=self.prefix_colon,
+-                    brace_single_entry_mapping_in_flow_sequence=self.brace_single_entry_mapping_in_flow_sequence,  # NOQA
+-                    dumper=self,
+-                )
+-                setattr(self, attr, _emitter)
+-                if self.map_indent is not None:
+-                    _emitter.best_map_indent = self.map_indent
+-                if self.sequence_indent is not None:
+-                    _emitter.best_sequence_indent = self.sequence_indent
+-                if self.sequence_dash_offset is not None:
+-                    _emitter.sequence_dash_offset = self.sequence_dash_offset
+-                    # _emitter.block_seq_indent = self.sequence_dash_offset
+-                if self.compact_seq_seq is not None:
+-                    _emitter.compact_seq_seq = self.compact_seq_seq
+-                if self.compact_seq_map is not None:
+-                    _emitter.compact_seq_map = self.compact_seq_map
+-            else:
+-                if getattr(self, '_stream', None) is None:
+-                    # wait for the stream
+-                    return None
+-                return None
+-        return getattr(self, attr)
+-
+-    @property
+-    def serializer(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            setattr(
+-                self,
+-                attr,
+-                self.Serializer(
+-                    encoding=self.encoding,
+-                    explicit_start=self.explicit_start,
+-                    explicit_end=self.explicit_end,
+-                    version=self.version,
+-                    tags=self.tags,
+-                    dumper=self,
+-                ),
+-            )
+-        return getattr(self, attr)
+-
+-    @property
+-    def representer(self):
+-        # type: () -> Any
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            repres = self.Representer(
+-                default_style=self.default_style,
+-                default_flow_style=self.default_flow_style,
+-                dumper=self,
+-            )
+-            if self.sort_base_mapping_type_on_output is not None:
+-                repres.sort_base_mapping_type_on_output = self.sort_base_mapping_type_on_output
+-            setattr(self, attr, repres)
+-        return getattr(self, attr)
+-
+-    # separate output resolver?
+-
+-    # def load(self, stream=None):
+-    #     if self._context_manager:
+-    #        if not self._input:
+-    #             raise TypeError("Missing input stream while dumping from context manager")
+-    #         for data in self._context_manager.load():
+-    #             yield data
+-    #         return
+-    #     if stream is None:
+-    #         raise TypeError("Need a stream argument when not loading from context manager")
+-    #     return self.load_one(stream)
+-
+-    def load(self, stream):
+-        # type: (Union[Path, StreamTextType]) -> Any
+-        """
+-        at this point you either have the non-pure Parser (which has its own reader and
+-        scanner) or you have the pure Parser.
+-        If the pure Parser is set, then set the Reader and Scanner, if not already set.
+-        If either the Scanner or Reader are set, you cannot use the non-pure Parser,
+-            so reset it to the pure parser and set the Reader resp. Scanner if necessary
+-        """
+-        if not hasattr(stream, 'read') and hasattr(stream, 'open'):
+-            # pathlib.Path() instance
+-            with stream.open('rb') as fp:
+-                return self.load(fp)
+-        constructor, parser = self.get_constructor_parser(stream)
+-        try:
+-            return constructor.get_single_data()
+-        finally:
+-            parser.dispose()
+-            try:
+-                self._reader.reset_reader()
+-            except AttributeError:
+-                pass
+-            try:
+-                self._scanner.reset_scanner()
+-            except AttributeError:
+-                pass
+-
+-    def load_all(self, stream, _kw=enforce):  # , skip=None):
+-        # type: (Union[Path, StreamTextType], Any) -> Any
+-        if _kw is not enforce:
+-            raise TypeError(
+-                '{}.__init__() takes no positional argument but at least '
+-                'one was given ({!r})'.format(self.__class__.__name__, _kw)
+-            )
+-        if not hasattr(stream, 'read') and hasattr(stream, 'open'):
+-            # pathlib.Path() instance
+-            with stream.open('r') as fp:
+-                for d in self.load_all(fp, _kw=enforce):
+-                    yield d
+-                return
+-        # if skip is None:
+-        #     skip = []
+-        # elif isinstance(skip, int):
+-        #     skip = [skip]
+-        constructor, parser = self.get_constructor_parser(stream)
+-        try:
+-            while constructor.check_data():
+-                yield constructor.get_data()
+-        finally:
+-            parser.dispose()
+-            try:
+-                self._reader.reset_reader()
+-            except AttributeError:
+-                pass
+-            try:
+-                self._scanner.reset_scanner()
+-            except AttributeError:
+-                pass
+-
+-    def get_constructor_parser(self, stream):
+-        # type: (StreamTextType) -> Any
+-        """
+-        the old cyaml needs special setup, and therefore the stream
+-        """
+-        if self.Parser is not CParser:
+-            if self.Reader is None:
+-                self.Reader = ruamel.yaml.reader.Reader
+-            if self.Scanner is None:
+-                self.Scanner = ruamel.yaml.scanner.Scanner
+-            self.reader.stream = stream
+-        else:
+-            if self.Reader is not None:
+-                if self.Scanner is None:
+-                    self.Scanner = ruamel.yaml.scanner.Scanner
+-                self.Parser = ruamel.yaml.parser.Parser
+-                self.reader.stream = stream
+-            elif self.Scanner is not None:
+-                if self.Reader is None:
+-                    self.Reader = ruamel.yaml.reader.Reader
+-                self.Parser = ruamel.yaml.parser.Parser
+-                self.reader.stream = stream
+-            else:
+-                # combined C level reader>scanner>parser
+-                # does some calls to the resolver, e.g. BaseResolver.descend_resolver
+-                # if you just initialise the CParser, to much of resolver.py
+-                # is actually used
+-                rslvr = self.Resolver
+-                # if rslvr is ruamel.yaml.resolver.VersionedResolver:
+-                #     rslvr = ruamel.yaml.resolver.Resolver
+-
+-                class XLoader(self.Parser, self.Constructor, rslvr):  # type: ignore
+-                    def __init__(selfx, stream, version=self.version, preserve_quotes=None):
+-                        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None  # NOQA
+-                        CParser.__init__(selfx, stream)
+-                        selfx._parser = selfx._composer = selfx
+-                        self.Constructor.__init__(selfx, loader=selfx)
+-                        selfx.allow_duplicate_keys = self.allow_duplicate_keys
+-                        rslvr.__init__(selfx, version=version, loadumper=selfx)
+-
+-                self._stream = stream
+-                loader = XLoader(stream)
+-                return loader, loader
+-        return self.constructor, self.parser
+-
+-    def dump(self, data, stream=None, _kw=enforce, transform=None):
+-        # type: (Any, Union[Path, StreamType], Any, Any) -> Any
+-        if self._context_manager:
+-            if not self._output:
+-                raise TypeError('Missing output stream while dumping from context manager')
+-            if _kw is not enforce:
+-                raise TypeError(
+-                    '{}.dump() takes one positional argument but at least '
+-                    'two were given ({!r})'.format(self.__class__.__name__, _kw)
+-                )
+-            if transform is not None:
+-                raise TypeError(
+-                    '{}.dump() in the context manager cannot have transform keyword '
+-                    ''.format(self.__class__.__name__)
+-                )
+-            self._context_manager.dump(data)
+-        else:  # old style
+-            if stream is None:
+-                raise TypeError('Need a stream argument when not dumping from context manager')
+-            return self.dump_all([data], stream, _kw, transform=transform)
+-
+-    def dump_all(self, documents, stream, _kw=enforce, transform=None):
+-        # type: (Any, Union[Path, StreamType], Any, Any) -> Any
+-        if self._context_manager:
+-            raise NotImplementedError
+-        if _kw is not enforce:
+-            raise TypeError(
+-                '{}.dump(_all) takes two positional argument but at least '
+-                'three were given ({!r})'.format(self.__class__.__name__, _kw)
+-            )
+-        self._output = stream
+-        self._context_manager = YAMLContextManager(self, transform=transform)
+-        for data in documents:
+-            self._context_manager.dump(data)
+-        self._context_manager.teardown_output()
+-        self._output = None
+-        self._context_manager = None
+-
+-    def Xdump_all(self, documents, stream, _kw=enforce, transform=None):
+-        # type: (Any, Union[Path, StreamType], Any, Any) -> Any
+-        """
+-        Serialize a sequence of Python objects into a YAML stream.
+-        """
+-        if not hasattr(stream, 'write') and hasattr(stream, 'open'):
+-            # pathlib.Path() instance
+-            with stream.open('w') as fp:
+-                return self.dump_all(documents, fp, _kw, transform=transform)
+-        if _kw is not enforce:
+-            raise TypeError(
+-                '{}.dump(_all) takes two positional argument but at least '
+-                'three were given ({!r})'.format(self.__class__.__name__, _kw)
+-            )
+-        # The stream should have the methods `write` and possibly `flush`.
+-        if self.top_level_colon_align is True:
+-            tlca = max([len(str(x)) for x in documents[0]])  # type: Any
+-        else:
+-            tlca = self.top_level_colon_align
+-        if transform is not None:
+-            fstream = stream
+-            if self.encoding is None:
+-                stream = StringIO()
+-            else:
+-                stream = BytesIO()
+-        serializer, representer, emitter = self.get_serializer_representer_emitter(
+-            stream, tlca
+-        )
+-        try:
+-            self.serializer.open()
+-            for data in documents:
+-                try:
+-                    self.representer.represent(data)
+-                except AttributeError:
+-                    # nprint(dir(dumper._representer))
+-                    raise
+-            self.serializer.close()
+-        finally:
+-            try:
+-                self.emitter.dispose()
+-            except AttributeError:
+-                raise
+-                # self.dumper.dispose()  # cyaml
+-            delattr(self, '_serializer')
+-            delattr(self, '_emitter')
+-        if transform:
+-            val = stream.getvalue()
+-            if self.encoding:
+-                val = val.decode(self.encoding)
+-            if fstream is None:
+-                transform(val)
+-            else:
+-                fstream.write(transform(val))
+-        return None
+-
+-    def get_serializer_representer_emitter(self, stream, tlca):
+-        # type: (StreamType, Any) -> Any
+-        # we have only .Serializer to deal with (vs .Reader & .Scanner), much simpler
+-        if self.Emitter is not CEmitter:
+-            if self.Serializer is None:
+-                self.Serializer = ruamel.yaml.serializer.Serializer
+-            self.emitter.stream = stream
+-            self.emitter.top_level_colon_align = tlca
+-            if self.scalar_after_indicator is not None:
+-                self.emitter.scalar_after_indicator = self.scalar_after_indicator
+-            return self.serializer, self.representer, self.emitter
+-        if self.Serializer is not None:
+-            # cannot set serializer with CEmitter
+-            self.Emitter = ruamel.yaml.emitter.Emitter
+-            self.emitter.stream = stream
+-            self.emitter.top_level_colon_align = tlca
+-            if self.scalar_after_indicator is not None:
+-                self.emitter.scalar_after_indicator = self.scalar_after_indicator
+-            return self.serializer, self.representer, self.emitter
+-        # C routines
+-
+-        rslvr = (
+-            ruamel.yaml.resolver.BaseResolver
+-            if 'base' in self.typ
+-            else ruamel.yaml.resolver.Resolver
+-        )
+-
+-        class XDumper(CEmitter, self.Representer, rslvr):  # type: ignore
+-            def __init__(
+-                selfx,
+-                stream,
+-                default_style=None,
+-                default_flow_style=None,
+-                canonical=None,
+-                indent=None,
+-                width=None,
+-                allow_unicode=None,
+-                line_break=None,
+-                encoding=None,
+-                explicit_start=None,
+-                explicit_end=None,
+-                version=None,
+-                tags=None,
+-                block_seq_indent=None,
+-                top_level_colon_align=None,
+-                prefix_colon=None,
+-            ):
+-                # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None   # NOQA
+-                CEmitter.__init__(
+-                    selfx,
+-                    stream,
+-                    canonical=canonical,
+-                    indent=indent,
+-                    width=width,
+-                    encoding=encoding,
+-                    allow_unicode=allow_unicode,
+-                    line_break=line_break,
+-                    explicit_start=explicit_start,
+-                    explicit_end=explicit_end,
+-                    version=version,
+-                    tags=tags,
+-                )
+-                selfx._emitter = selfx._serializer = selfx._representer = selfx
+-                self.Representer.__init__(
+-                    selfx, default_style=default_style, default_flow_style=default_flow_style
+-                )
+-                rslvr.__init__(selfx)
+-
+-        self._stream = stream
+-        dumper = XDumper(
+-            stream,
+-            default_style=self.default_style,
+-            default_flow_style=self.default_flow_style,
+-            canonical=self.canonical,
+-            indent=self.old_indent,
+-            width=self.width,
+-            allow_unicode=self.allow_unicode,
+-            line_break=self.line_break,
+-            explicit_start=self.explicit_start,
+-            explicit_end=self.explicit_end,
+-            version=self.version,
+-            tags=self.tags,
+-        )
+-        self._emitter = self._serializer = dumper
+-        return dumper, dumper, dumper
+-
+-    # basic types
+-    def map(self, **kw):
+-        # type: (Any) -> Any
+-        if 'rt' in self.typ:
+-            from dynaconf.vendor.ruamel.yaml.comments import CommentedMap
+-
+-            return CommentedMap(**kw)
+-        else:
+-            return dict(**kw)
+-
+-    def seq(self, *args):
+-        # type: (Any) -> Any
+-        if 'rt' in self.typ:
+-            from dynaconf.vendor.ruamel.yaml.comments import CommentedSeq
+-
+-            return CommentedSeq(*args)
+-        else:
+-            return list(*args)
+-
+-    # helpers
+-    def official_plug_ins(self):
+-        # type: () -> Any
+-        bd = os.path.dirname(__file__)
+-        gpbd = os.path.dirname(os.path.dirname(bd))
+-        res = [x.replace(gpbd, "")[1:-3] for x in glob.glob(bd + '/*/__plug_in__.py')]
+-        return res
+-
+-    def register_class(self, cls):
+-        # type:(Any) -> Any
+-        """
+-        register a class for dumping loading
+-        - if it has attribute yaml_tag use that to register, else use class name
+-        - if it has methods to_yaml/from_yaml use those to dump/load else dump attributes
+-          as mapping
+-        """
+-        tag = getattr(cls, 'yaml_tag', '!' + cls.__name__)
+-        try:
+-            self.representer.add_representer(cls, cls.to_yaml)
+-        except AttributeError:
+-
+-            def t_y(representer, data):
+-                # type: (Any, Any) -> Any
+-                return representer.represent_yaml_object(
+-                    tag, data, cls, flow_style=representer.default_flow_style
+-                )
+-
+-            self.representer.add_representer(cls, t_y)
+-        try:
+-            self.constructor.add_constructor(tag, cls.from_yaml)
+-        except AttributeError:
+-
+-            def f_y(constructor, node):
+-                # type: (Any, Any) -> Any
+-                return constructor.construct_yaml_object(node, cls)
+-
+-            self.constructor.add_constructor(tag, f_y)
+-        return cls
+-
+-    def parse(self, stream):
+-        # type: (StreamTextType) -> Any
+-        """
+-        Parse a YAML stream and produce parsing events.
+-        """
+-        _, parser = self.get_constructor_parser(stream)
+-        try:
+-            while parser.check_event():
+-                yield parser.get_event()
+-        finally:
+-            parser.dispose()
+-            try:
+-                self._reader.reset_reader()
+-            except AttributeError:
+-                pass
+-            try:
+-                self._scanner.reset_scanner()
+-            except AttributeError:
+-                pass
+-
+-    # ### context manager
+-
+-    def __enter__(self):
+-        # type: () -> Any
+-        self._context_manager = YAMLContextManager(self)
+-        return self
+-
+-    def __exit__(self, typ, value, traceback):
+-        # type: (Any, Any, Any) -> None
+-        if typ:
+-            nprint('typ', typ)
+-        self._context_manager.teardown_output()
+-        # self._context_manager.teardown_input()
+-        self._context_manager = None
+-
+-    # ### backwards compatibility
+-    def _indent(self, mapping=None, sequence=None, offset=None):
+-        # type: (Any, Any, Any) -> None
+-        if mapping is not None:
+-            self.map_indent = mapping
+-        if sequence is not None:
+-            self.sequence_indent = sequence
+-        if offset is not None:
+-            self.sequence_dash_offset = offset
+-
+-    @property
+-    def indent(self):
+-        # type: () -> Any
+-        return self._indent
+-
+-    @indent.setter
+-    def indent(self, val):
+-        # type: (Any) -> None
+-        self.old_indent = val
+-
+-    @property
+-    def block_seq_indent(self):
+-        # type: () -> Any
+-        return self.sequence_dash_offset
+-
+-    @block_seq_indent.setter
+-    def block_seq_indent(self, val):
+-        # type: (Any) -> None
+-        self.sequence_dash_offset = val
+-
+-    def compact(self, seq_seq=None, seq_map=None):
+-        # type: (Any, Any) -> None
+-        self.compact_seq_seq = seq_seq
+-        self.compact_seq_map = seq_map
+-
+-
+-class YAMLContextManager(object):
+-    def __init__(self, yaml, transform=None):
+-        # type: (Any, Any) -> None  # used to be: (Any, Optional[Callable]) -> None
+-        self._yaml = yaml
+-        self._output_inited = False
+-        self._output_path = None
+-        self._output = self._yaml._output
+-        self._transform = transform
+-
+-        # self._input_inited = False
+-        # self._input = input
+-        # self._input_path = None
+-        # self._transform = yaml.transform
+-        # self._fstream = None
+-
+-        if not hasattr(self._output, 'write') and hasattr(self._output, 'open'):
+-            # pathlib.Path() instance, open with the same mode
+-            self._output_path = self._output
+-            self._output = self._output_path.open('w')
+-
+-        # if not hasattr(self._stream, 'write') and hasattr(stream, 'open'):
+-        # if not hasattr(self._input, 'read') and hasattr(self._input, 'open'):
+-        #    # pathlib.Path() instance, open with the same mode
+-        #    self._input_path = self._input
+-        #    self._input = self._input_path.open('r')
+-
+-        if self._transform is not None:
+-            self._fstream = self._output
+-            if self._yaml.encoding is None:
+-                self._output = StringIO()
+-            else:
+-                self._output = BytesIO()
+-
+-    def teardown_output(self):
+-        # type: () -> None
+-        if self._output_inited:
+-            self._yaml.serializer.close()
+-        else:
+-            return
+-        try:
+-            self._yaml.emitter.dispose()
+-        except AttributeError:
+-            raise
+-            # self.dumper.dispose()  # cyaml
+-        try:
+-            delattr(self._yaml, '_serializer')
+-            delattr(self._yaml, '_emitter')
+-        except AttributeError:
+-            raise
+-        if self._transform:
+-            val = self._output.getvalue()
+-            if self._yaml.encoding:
+-                val = val.decode(self._yaml.encoding)
+-            if self._fstream is None:
+-                self._transform(val)
+-            else:
+-                self._fstream.write(self._transform(val))
+-                self._fstream.flush()
+-                self._output = self._fstream  # maybe not necessary
+-        if self._output_path is not None:
+-            self._output.close()
+-
+-    def init_output(self, first_data):
+-        # type: (Any) -> None
+-        if self._yaml.top_level_colon_align is True:
+-            tlca = max([len(str(x)) for x in first_data])  # type: Any
+-        else:
+-            tlca = self._yaml.top_level_colon_align
+-        self._yaml.get_serializer_representer_emitter(self._output, tlca)
+-        self._yaml.serializer.open()
+-        self._output_inited = True
+-
+-    def dump(self, data):
+-        # type: (Any) -> None
+-        if not self._output_inited:
+-            self.init_output(data)
+-        try:
+-            self._yaml.representer.represent(data)
+-        except AttributeError:
+-            # nprint(dir(dumper._representer))
+-            raise
+-
+-    # def teardown_input(self):
+-    #     pass
+-    #
+-    # def init_input(self):
+-    #     # set the constructor and parser on YAML() instance
+-    #     self._yaml.get_constructor_parser(stream)
+-    #
+-    # def load(self):
+-    #     if not self._input_inited:
+-    #         self.init_input()
+-    #     try:
+-    #         while self._yaml.constructor.check_data():
+-    #             yield self._yaml.constructor.get_data()
+-    #     finally:
+-    #         parser.dispose()
+-    #         try:
+-    #             self._reader.reset_reader()  # type: ignore
+-    #         except AttributeError:
+-    #             pass
+-    #         try:
+-    #             self._scanner.reset_scanner()  # type: ignore
+-    #         except AttributeError:
+-    #             pass
+-
+-
+-def yaml_object(yml):
+-    # type: (Any) -> Any
+-    """ decorator for classes that needs to dump/load objects
+-    The tag for such objects is taken from the class attribute yaml_tag (or the
+-    class name in lowercase in case unavailable)
+-    If methods to_yaml and/or from_yaml are available, these are called for dumping resp.
+-    loading, default routines (dumping a mapping of the attributes) used otherwise.
+-    """
+-
+-    def yo_deco(cls):
+-        # type: (Any) -> Any
+-        tag = getattr(cls, 'yaml_tag', '!' + cls.__name__)
+-        try:
+-            yml.representer.add_representer(cls, cls.to_yaml)
+-        except AttributeError:
+-
+-            def t_y(representer, data):
+-                # type: (Any, Any) -> Any
+-                return representer.represent_yaml_object(
+-                    tag, data, cls, flow_style=representer.default_flow_style
+-                )
+-
+-            yml.representer.add_representer(cls, t_y)
+-        try:
+-            yml.constructor.add_constructor(tag, cls.from_yaml)
+-        except AttributeError:
+-
+-            def f_y(constructor, node):
+-                # type: (Any, Any) -> Any
+-                return constructor.construct_yaml_object(node, cls)
+-
+-            yml.constructor.add_constructor(tag, f_y)
+-        return cls
+-
+-    return yo_deco
+-
+-
+-########################################################################################
+-
+-
+-def scan(stream, Loader=Loader):
+-    # type: (StreamTextType, Any) -> Any
+-    """
+-    Scan a YAML stream and produce scanning tokens.
+-    """
+-    loader = Loader(stream)
+-    try:
+-        while loader.scanner.check_token():
+-            yield loader.scanner.get_token()
+-    finally:
+-        loader._parser.dispose()
+-
+-
+-def parse(stream, Loader=Loader):
+-    # type: (StreamTextType, Any) -> Any
+-    """
+-    Parse a YAML stream and produce parsing events.
+-    """
+-    loader = Loader(stream)
+-    try:
+-        while loader._parser.check_event():
+-            yield loader._parser.get_event()
+-    finally:
+-        loader._parser.dispose()
+-
+-
+-def compose(stream, Loader=Loader):
+-    # type: (StreamTextType, Any) -> Any
+-    """
+-    Parse the first YAML document in a stream
+-    and produce the corresponding representation tree.
+-    """
+-    loader = Loader(stream)
+-    try:
+-        return loader.get_single_node()
+-    finally:
+-        loader.dispose()
+-
+-
+-def compose_all(stream, Loader=Loader):
+-    # type: (StreamTextType, Any) -> Any
+-    """
+-    Parse all YAML documents in a stream
+-    and produce corresponding representation trees.
+-    """
+-    loader = Loader(stream)
+-    try:
+-        while loader.check_node():
+-            yield loader._composer.get_node()
+-    finally:
+-        loader._parser.dispose()
+-
+-
+-def load(stream, Loader=None, version=None, preserve_quotes=None):
+-    # type: (StreamTextType, Any, Optional[VersionType], Any) -> Any
+-    """
+-    Parse the first YAML document in a stream
+-    and produce the corresponding Python object.
+-    """
+-    if Loader is None:
+-        warnings.warn(UnsafeLoaderWarning.text, UnsafeLoaderWarning, stacklevel=2)
+-        Loader = UnsafeLoader
+-    loader = Loader(stream, version, preserve_quotes=preserve_quotes)
+-    try:
+-        return loader._constructor.get_single_data()
+-    finally:
+-        loader._parser.dispose()
+-        try:
+-            loader._reader.reset_reader()
+-        except AttributeError:
+-            pass
+-        try:
+-            loader._scanner.reset_scanner()
+-        except AttributeError:
+-            pass
+-
+-
+-def load_all(stream, Loader=None, version=None, preserve_quotes=None):
+-    # type: (Optional[StreamTextType], Any, Optional[VersionType], Optional[bool]) -> Any  # NOQA
+-    """
+-    Parse all YAML documents in a stream
+-    and produce corresponding Python objects.
+-    """
+-    if Loader is None:
+-        warnings.warn(UnsafeLoaderWarning.text, UnsafeLoaderWarning, stacklevel=2)
+-        Loader = UnsafeLoader
+-    loader = Loader(stream, version, preserve_quotes=preserve_quotes)
+-    try:
+-        while loader._constructor.check_data():
+-            yield loader._constructor.get_data()
+-    finally:
+-        loader._parser.dispose()
+-        try:
+-            loader._reader.reset_reader()
+-        except AttributeError:
+-            pass
+-        try:
+-            loader._scanner.reset_scanner()
+-        except AttributeError:
+-            pass
+-
+-
+-def safe_load(stream, version=None):
+-    # type: (StreamTextType, Optional[VersionType]) -> Any
+-    """
+-    Parse the first YAML document in a stream
+-    and produce the corresponding Python object.
+-    Resolve only basic YAML tags.
+-    """
+-    return load(stream, SafeLoader, version)
+-
+-
+-def safe_load_all(stream, version=None):
+-    # type: (StreamTextType, Optional[VersionType]) -> Any
+-    """
+-    Parse all YAML documents in a stream
+-    and produce corresponding Python objects.
+-    Resolve only basic YAML tags.
+-    """
+-    return load_all(stream, SafeLoader, version)
+-
+-
+-def round_trip_load(stream, version=None, preserve_quotes=None):
+-    # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> Any
+-    """
+-    Parse the first YAML document in a stream
+-    and produce the corresponding Python object.
+-    Resolve only basic YAML tags.
+-    """
+-    return load(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes)
+-
+-
+-def round_trip_load_all(stream, version=None, preserve_quotes=None):
+-    # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> Any
+-    """
+-    Parse all YAML documents in a stream
+-    and produce corresponding Python objects.
+-    Resolve only basic YAML tags.
+-    """
+-    return load_all(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes)
+-
+-
+-def emit(
+-    events,
+-    stream=None,
+-    Dumper=Dumper,
+-    canonical=None,
+-    indent=None,
+-    width=None,
+-    allow_unicode=None,
+-    line_break=None,
+-):
+-    # type: (Any, Optional[StreamType], Any, Optional[bool], Union[int, None], Optional[int], Optional[bool], Any) -> Any  # NOQA
+-    """
+-    Emit YAML parsing events into a stream.
+-    If stream is None, return the produced string instead.
+-    """
+-    getvalue = None
+-    if stream is None:
+-        stream = StringIO()
+-        getvalue = stream.getvalue
+-    dumper = Dumper(
+-        stream,
+-        canonical=canonical,
+-        indent=indent,
+-        width=width,
+-        allow_unicode=allow_unicode,
+-        line_break=line_break,
+-    )
+-    try:
+-        for event in events:
+-            dumper.emit(event)
+-    finally:
+-        try:
+-            dumper._emitter.dispose()
+-        except AttributeError:
+-            raise
+-            dumper.dispose()  # cyaml
+-    if getvalue is not None:
+-        return getvalue()
+-
+-
+-enc = None if PY3 else 'utf-8'
+-
+-
+-def serialize_all(
+-    nodes,
+-    stream=None,
+-    Dumper=Dumper,
+-    canonical=None,
+-    indent=None,
+-    width=None,
+-    allow_unicode=None,
+-    line_break=None,
+-    encoding=enc,
+-    explicit_start=None,
+-    explicit_end=None,
+-    version=None,
+-    tags=None,
+-):
+-    # type: (Any, Optional[StreamType], Any, Any, Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Optional[VersionType], Any) -> Any # NOQA
+-    """
+-    Serialize a sequence of representation trees into a YAML stream.
+-    If stream is None, return the produced string instead.
+-    """
+-    getvalue = None
+-    if stream is None:
+-        if encoding is None:
+-            stream = StringIO()
+-        else:
+-            stream = BytesIO()
+-        getvalue = stream.getvalue
+-    dumper = Dumper(
+-        stream,
+-        canonical=canonical,
+-        indent=indent,
+-        width=width,
+-        allow_unicode=allow_unicode,
+-        line_break=line_break,
+-        encoding=encoding,
+-        version=version,
+-        tags=tags,
+-        explicit_start=explicit_start,
+-        explicit_end=explicit_end,
+-    )
+-    try:
+-        dumper._serializer.open()
+-        for node in nodes:
+-            dumper.serialize(node)
+-        dumper._serializer.close()
+-    finally:
+-        try:
+-            dumper._emitter.dispose()
+-        except AttributeError:
+-            raise
+-            dumper.dispose()  # cyaml
+-    if getvalue is not None:
+-        return getvalue()
+-
+-
+-def serialize(node, stream=None, Dumper=Dumper, **kwds):
+-    # type: (Any, Optional[StreamType], Any, Any) -> Any
+-    """
+-    Serialize a representation tree into a YAML stream.
+-    If stream is None, return the produced string instead.
+-    """
+-    return serialize_all([node], stream, Dumper=Dumper, **kwds)
+-
+-
+-def dump_all(
+-    documents,
+-    stream=None,
+-    Dumper=Dumper,
+-    default_style=None,
+-    default_flow_style=None,
+-    canonical=None,
+-    indent=None,
+-    width=None,
+-    allow_unicode=None,
+-    line_break=None,
+-    encoding=enc,
+-    explicit_start=None,
+-    explicit_end=None,
+-    version=None,
+-    tags=None,
+-    block_seq_indent=None,
+-    top_level_colon_align=None,
+-    prefix_colon=None,
+-):
+-    # type: (Any, Optional[StreamType], Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> Optional[str]   # NOQA
+-    """
+-    Serialize a sequence of Python objects into a YAML stream.
+-    If stream is None, return the produced string instead.
+-    """
+-    getvalue = None
+-    if top_level_colon_align is True:
+-        top_level_colon_align = max([len(str(x)) for x in documents[0]])
+-    if stream is None:
+-        if encoding is None:
+-            stream = StringIO()
+-        else:
+-            stream = BytesIO()
+-        getvalue = stream.getvalue
+-    dumper = Dumper(
+-        stream,
+-        default_style=default_style,
+-        default_flow_style=default_flow_style,
+-        canonical=canonical,
+-        indent=indent,
+-        width=width,
+-        allow_unicode=allow_unicode,
+-        line_break=line_break,
+-        encoding=encoding,
+-        explicit_start=explicit_start,
+-        explicit_end=explicit_end,
+-        version=version,
+-        tags=tags,
+-        block_seq_indent=block_seq_indent,
+-        top_level_colon_align=top_level_colon_align,
+-        prefix_colon=prefix_colon,
+-    )
+-    try:
+-        dumper._serializer.open()
+-        for data in documents:
+-            try:
+-                dumper._representer.represent(data)
+-            except AttributeError:
+-                # nprint(dir(dumper._representer))
+-                raise
+-        dumper._serializer.close()
+-    finally:
+-        try:
+-            dumper._emitter.dispose()
+-        except AttributeError:
+-            raise
+-            dumper.dispose()  # cyaml
+-    if getvalue is not None:
+-        return getvalue()
+-    return None
+-
+-
+-def dump(
+-    data,
+-    stream=None,
+-    Dumper=Dumper,
+-    default_style=None,
+-    default_flow_style=None,
+-    canonical=None,
+-    indent=None,
+-    width=None,
+-    allow_unicode=None,
+-    line_break=None,
+-    encoding=enc,
+-    explicit_start=None,
+-    explicit_end=None,
+-    version=None,
+-    tags=None,
+-    block_seq_indent=None,
+-):
+-    # type: (Any, Optional[StreamType], Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any) -> Optional[str]   # NOQA
+-    """
+-    Serialize a Python object into a YAML stream.
+-    If stream is None, return the produced string instead.
+-
+-    default_style ∈ None, '', '"', "'", '|', '>'
+-
+-    """
+-    return dump_all(
+-        [data],
+-        stream,
+-        Dumper=Dumper,
+-        default_style=default_style,
+-        default_flow_style=default_flow_style,
+-        canonical=canonical,
+-        indent=indent,
+-        width=width,
+-        allow_unicode=allow_unicode,
+-        line_break=line_break,
+-        encoding=encoding,
+-        explicit_start=explicit_start,
+-        explicit_end=explicit_end,
+-        version=version,
+-        tags=tags,
+-        block_seq_indent=block_seq_indent,
+-    )
+-
+-
+-def safe_dump_all(documents, stream=None, **kwds):
+-    # type: (Any, Optional[StreamType], Any) -> Optional[str]
+-    """
+-    Serialize a sequence of Python objects into a YAML stream.
+-    Produce only basic YAML tags.
+-    If stream is None, return the produced string instead.
+-    """
+-    return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
+-
+-
+-def safe_dump(data, stream=None, **kwds):
+-    # type: (Any, Optional[StreamType], Any) -> Optional[str]
+-    """
+-    Serialize a Python object into a YAML stream.
+-    Produce only basic YAML tags.
+-    If stream is None, return the produced string instead.
+-    """
+-    return dump_all([data], stream, Dumper=SafeDumper, **kwds)
+-
+-
+-def round_trip_dump(
+-    data,
+-    stream=None,
+-    Dumper=RoundTripDumper,
+-    default_style=None,
+-    default_flow_style=None,
+-    canonical=None,
+-    indent=None,
+-    width=None,
+-    allow_unicode=None,
+-    line_break=None,
+-    encoding=enc,
+-    explicit_start=None,
+-    explicit_end=None,
+-    version=None,
+-    tags=None,
+-    block_seq_indent=None,
+-    top_level_colon_align=None,
+-    prefix_colon=None,
+-):
+-    # type: (Any, Optional[StreamType], Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any, Any, Any) -> Optional[str]   # NOQA
+-    allow_unicode = True if allow_unicode is None else allow_unicode
+-    return dump_all(
+-        [data],
+-        stream,
+-        Dumper=Dumper,
+-        default_style=default_style,
+-        default_flow_style=default_flow_style,
+-        canonical=canonical,
+-        indent=indent,
+-        width=width,
+-        allow_unicode=allow_unicode,
+-        line_break=line_break,
+-        encoding=encoding,
+-        explicit_start=explicit_start,
+-        explicit_end=explicit_end,
+-        version=version,
+-        tags=tags,
+-        block_seq_indent=block_seq_indent,
+-        top_level_colon_align=top_level_colon_align,
+-        prefix_colon=prefix_colon,
+-    )
+-
+-
+-# Loader/Dumper are no longer composites, to get to the associated
+-# Resolver()/Representer(), etc., you need to instantiate the class
+-
+-
+-def add_implicit_resolver(
+-    tag, regexp, first=None, Loader=None, Dumper=None, resolver=Resolver
+-):
+-    # type: (Any, Any, Any, Any, Any, Any) -> None
+-    """
+-    Add an implicit scalar detector.
+-    If an implicit scalar value matches the given regexp,
+-    the corresponding tag is assigned to the scalar.
+-    first is a sequence of possible initial characters or None.
+-    """
+-    if Loader is None and Dumper is None:
+-        resolver.add_implicit_resolver(tag, regexp, first)
+-        return
+-    if Loader:
+-        if hasattr(Loader, 'add_implicit_resolver'):
+-            Loader.add_implicit_resolver(tag, regexp, first)
+-        elif issubclass(
+-            Loader, (BaseLoader, SafeLoader, ruamel.yaml.loader.Loader, RoundTripLoader)
+-        ):
+-            Resolver.add_implicit_resolver(tag, regexp, first)
+-        else:
+-            raise NotImplementedError
+-    if Dumper:
+-        if hasattr(Dumper, 'add_implicit_resolver'):
+-            Dumper.add_implicit_resolver(tag, regexp, first)
+-        elif issubclass(
+-            Dumper, (BaseDumper, SafeDumper, ruamel.yaml.dumper.Dumper, RoundTripDumper)
+-        ):
+-            Resolver.add_implicit_resolver(tag, regexp, first)
+-        else:
+-            raise NotImplementedError
+-
+-
+-# this code currently not tested
+-def add_path_resolver(tag, path, kind=None, Loader=None, Dumper=None, resolver=Resolver):
+-    # type: (Any, Any, Any, Any, Any, Any) -> None
+-    """
+-    Add a path based resolver for the given tag.
+-    A path is a list of keys that forms a path
+-    to a node in the representation tree.
+-    Keys can be string values, integers, or None.
+-    """
+-    if Loader is None and Dumper is None:
+-        resolver.add_path_resolver(tag, path, kind)
+-        return
+-    if Loader:
+-        if hasattr(Loader, 'add_path_resolver'):
+-            Loader.add_path_resolver(tag, path, kind)
+-        elif issubclass(
+-            Loader, (BaseLoader, SafeLoader, ruamel.yaml.loader.Loader, RoundTripLoader)
+-        ):
+-            Resolver.add_path_resolver(tag, path, kind)
+-        else:
+-            raise NotImplementedError
+-    if Dumper:
+-        if hasattr(Dumper, 'add_path_resolver'):
+-            Dumper.add_path_resolver(tag, path, kind)
+-        elif issubclass(
+-            Dumper, (BaseDumper, SafeDumper, ruamel.yaml.dumper.Dumper, RoundTripDumper)
+-        ):
+-            Resolver.add_path_resolver(tag, path, kind)
+-        else:
+-            raise NotImplementedError
+-
+-
+-def add_constructor(tag, object_constructor, Loader=None, constructor=Constructor):
+-    # type: (Any, Any, Any, Any) -> None
+-    """
+-    Add an object constructor for the given tag.
+-    object_onstructor is a function that accepts a Loader instance
+-    and a node object and produces the corresponding Python object.
+-    """
+-    if Loader is None:
+-        constructor.add_constructor(tag, object_constructor)
+-    else:
+-        if hasattr(Loader, 'add_constructor'):
+-            Loader.add_constructor(tag, object_constructor)
+-            return
+-        if issubclass(Loader, BaseLoader):
+-            BaseConstructor.add_constructor(tag, object_constructor)
+-        elif issubclass(Loader, SafeLoader):
+-            SafeConstructor.add_constructor(tag, object_constructor)
+-        elif issubclass(Loader, Loader):
+-            Constructor.add_constructor(tag, object_constructor)
+-        elif issubclass(Loader, RoundTripLoader):
+-            RoundTripConstructor.add_constructor(tag, object_constructor)
+-        else:
+-            raise NotImplementedError
+-
+-
+-def add_multi_constructor(tag_prefix, multi_constructor, Loader=None, constructor=Constructor):
+-    # type: (Any, Any, Any, Any) -> None
+-    """
+-    Add a multi-constructor for the given tag prefix.
+-    Multi-constructor is called for a node if its tag starts with tag_prefix.
+-    Multi-constructor accepts a Loader instance, a tag suffix,
+-    and a node object and produces the corresponding Python object.
+-    """
+-    if Loader is None:
+-        constructor.add_multi_constructor(tag_prefix, multi_constructor)
+-    else:
+-        if False and hasattr(Loader, 'add_multi_constructor'):
+-            Loader.add_multi_constructor(tag_prefix, constructor)
+-            return
+-        if issubclass(Loader, BaseLoader):
+-            BaseConstructor.add_multi_constructor(tag_prefix, multi_constructor)
+-        elif issubclass(Loader, SafeLoader):
+-            SafeConstructor.add_multi_constructor(tag_prefix, multi_constructor)
+-        elif issubclass(Loader, ruamel.yaml.loader.Loader):
+-            Constructor.add_multi_constructor(tag_prefix, multi_constructor)
+-        elif issubclass(Loader, RoundTripLoader):
+-            RoundTripConstructor.add_multi_constructor(tag_prefix, multi_constructor)
+-        else:
+-            raise NotImplementedError
+-
+-
+-def add_representer(data_type, object_representer, Dumper=None, representer=Representer):
+-    # type: (Any, Any, Any, Any) -> None
+-    """
+-    Add a representer for the given type.
+-    object_representer is a function accepting a Dumper instance
+-    and an instance of the given data type
+-    and producing the corresponding representation node.
+-    """
+-    if Dumper is None:
+-        representer.add_representer(data_type, object_representer)
+-    else:
+-        if hasattr(Dumper, 'add_representer'):
+-            Dumper.add_representer(data_type, object_representer)
+-            return
+-        if issubclass(Dumper, BaseDumper):
+-            BaseRepresenter.add_representer(data_type, object_representer)
+-        elif issubclass(Dumper, SafeDumper):
+-            SafeRepresenter.add_representer(data_type, object_representer)
+-        elif issubclass(Dumper, Dumper):
+-            Representer.add_representer(data_type, object_representer)
+-        elif issubclass(Dumper, RoundTripDumper):
+-            RoundTripRepresenter.add_representer(data_type, object_representer)
+-        else:
+-            raise NotImplementedError
+-
+-
+-# this code currently not tested
+-def add_multi_representer(data_type, multi_representer, Dumper=None, representer=Representer):
+-    # type: (Any, Any, Any, Any) -> None
+-    """
+-    Add a representer for the given type.
+-    multi_representer is a function accepting a Dumper instance
+-    and an instance of the given data type or subtype
+-    and producing the corresponding representation node.
+-    """
+-    if Dumper is None:
+-        representer.add_multi_representer(data_type, multi_representer)
+-    else:
+-        if hasattr(Dumper, 'add_multi_representer'):
+-            Dumper.add_multi_representer(data_type, multi_representer)
+-            return
+-        if issubclass(Dumper, BaseDumper):
+-            BaseRepresenter.add_multi_representer(data_type, multi_representer)
+-        elif issubclass(Dumper, SafeDumper):
+-            SafeRepresenter.add_multi_representer(data_type, multi_representer)
+-        elif issubclass(Dumper, Dumper):
+-            Representer.add_multi_representer(data_type, multi_representer)
+-        elif issubclass(Dumper, RoundTripDumper):
+-            RoundTripRepresenter.add_multi_representer(data_type, multi_representer)
+-        else:
+-            raise NotImplementedError
+-
+-
+-class YAMLObjectMetaclass(type):
+-    """
+-    The metaclass for YAMLObject.
+-    """
+-
+-    def __init__(cls, name, bases, kwds):
+-        # type: (Any, Any, Any) -> None
+-        super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
+-        if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
+-            cls.yaml_constructor.add_constructor(cls.yaml_tag, cls.from_yaml)  # type: ignore
+-            cls.yaml_representer.add_representer(cls, cls.to_yaml)  # type: ignore
+-
+-
+-class YAMLObject(with_metaclass(YAMLObjectMetaclass)):  # type: ignore
+-    """
+-    An object that can dump itself to a YAML stream
+-    and load itself from a YAML stream.
+-    """
+-
+-    __slots__ = ()  # no direct instantiation, so allow immutable subclasses
+-
+-    yaml_constructor = Constructor
+-    yaml_representer = Representer
+-
+-    yaml_tag = None  # type: Any
+-    yaml_flow_style = None  # type: Any
+-
+-    @classmethod
+-    def from_yaml(cls, constructor, node):
+-        # type: (Any, Any) -> Any
+-        """
+-        Convert a representation node to a Python object.
+-        """
+-        return constructor.construct_yaml_object(node, cls)
+-
+-    @classmethod
+-    def to_yaml(cls, representer, data):
+-        # type: (Any, Any) -> Any
+-        """
+-        Convert a Python object to a representation node.
+-        """
+-        return representer.represent_yaml_object(
+-            cls.yaml_tag, data, cls, flow_style=cls.yaml_flow_style
+-        )
+diff --git a/dynaconf/vendor_src/ruamel/yaml/nodes.py b/dynaconf/vendor_src/ruamel/yaml/nodes.py
+deleted file mode 100644
+index da86e9c..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/nodes.py
++++ /dev/null
+@@ -1,131 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function
+-
+-import sys
+-from .compat import string_types
+-
+-if False:  # MYPY
+-    from typing import Dict, Any, Text  # NOQA
+-
+-
+-class Node(object):
+-    __slots__ = 'tag', 'value', 'start_mark', 'end_mark', 'comment', 'anchor'
+-
+-    def __init__(self, tag, value, start_mark, end_mark, comment=None, anchor=None):
+-        # type: (Any, Any, Any, Any, Any, Any) -> None
+-        self.tag = tag
+-        self.value = value
+-        self.start_mark = start_mark
+-        self.end_mark = end_mark
+-        self.comment = comment
+-        self.anchor = anchor
+-
+-    def __repr__(self):
+-        # type: () -> str
+-        value = self.value
+-        # if isinstance(value, list):
+-        #     if len(value) == 0:
+-        #         value = '<empty>'
+-        #     elif len(value) == 1:
+-        #         value = '<1 item>'
+-        #     else:
+-        #         value = '<%d items>' % len(value)
+-        # else:
+-        #     if len(value) > 75:
+-        #         value = repr(value[:70]+u' ... ')
+-        #     else:
+-        #         value = repr(value)
+-        value = repr(value)
+-        return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)
+-
+-    def dump(self, indent=0):
+-        # type: (int) -> None
+-        if isinstance(self.value, string_types):
+-            sys.stdout.write(
+-                '{}{}(tag={!r}, value={!r})\n'.format(
+-                    '  ' * indent, self.__class__.__name__, self.tag, self.value
+-                )
+-            )
+-            if self.comment:
+-                sys.stdout.write('    {}comment: {})\n'.format('  ' * indent, self.comment))
+-            return
+-        sys.stdout.write(
+-            '{}{}(tag={!r})\n'.format('  ' * indent, self.__class__.__name__, self.tag)
+-        )
+-        if self.comment:
+-            sys.stdout.write('    {}comment: {})\n'.format('  ' * indent, self.comment))
+-        for v in self.value:
+-            if isinstance(v, tuple):
+-                for v1 in v:
+-                    v1.dump(indent + 1)
+-            elif isinstance(v, Node):
+-                v.dump(indent + 1)
+-            else:
+-                sys.stdout.write('Node value type? {}\n'.format(type(v)))
+-
+-
+-class ScalarNode(Node):
+-    """
+-    styles:
+-      ? -> set() ? key, no value
+-      " -> double quoted
+-      ' -> single quoted
+-      | -> literal style
+-      > -> folding style
+-    """
+-
+-    __slots__ = ('style',)
+-    id = 'scalar'
+-
+-    def __init__(
+-        self, tag, value, start_mark=None, end_mark=None, style=None, comment=None, anchor=None
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
+-        Node.__init__(self, tag, value, start_mark, end_mark, comment=comment, anchor=anchor)
+-        self.style = style
+-
+-
+-class CollectionNode(Node):
+-    __slots__ = ('flow_style',)
+-
+-    def __init__(
+-        self,
+-        tag,
+-        value,
+-        start_mark=None,
+-        end_mark=None,
+-        flow_style=None,
+-        comment=None,
+-        anchor=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
+-        Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
+-        self.flow_style = flow_style
+-        self.anchor = anchor
+-
+-
+-class SequenceNode(CollectionNode):
+-    __slots__ = ()
+-    id = 'sequence'
+-
+-
+-class MappingNode(CollectionNode):
+-    __slots__ = ('merge',)
+-    id = 'mapping'
+-
+-    def __init__(
+-        self,
+-        tag,
+-        value,
+-        start_mark=None,
+-        end_mark=None,
+-        flow_style=None,
+-        comment=None,
+-        anchor=None,
+-    ):
+-        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
+-        CollectionNode.__init__(
+-            self, tag, value, start_mark, end_mark, flow_style, comment, anchor
+-        )
+-        self.merge = None
+diff --git a/dynaconf/vendor_src/ruamel/yaml/parser.py b/dynaconf/vendor_src/ruamel/yaml/parser.py
+deleted file mode 100644
+index 3d67a1c..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/parser.py
++++ /dev/null
+@@ -1,802 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-# The following YAML grammar is LL(1) and is parsed by a recursive descent
+-# parser.
+-#
+-# stream            ::= STREAM-START implicit_document? explicit_document*
+-#                                                                   STREAM-END
+-# implicit_document ::= block_node DOCUMENT-END*
+-# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+-# block_node_or_indentless_sequence ::=
+-#                       ALIAS
+-#                       | properties (block_content |
+-#                                                   indentless_block_sequence)?
+-#                       | block_content
+-#                       | indentless_block_sequence
+-# block_node        ::= ALIAS
+-#                       | properties block_content?
+-#                       | block_content
+-# flow_node         ::= ALIAS
+-#                       | properties flow_content?
+-#                       | flow_content
+-# properties        ::= TAG ANCHOR? | ANCHOR TAG?
+-# block_content     ::= block_collection | flow_collection | SCALAR
+-# flow_content      ::= flow_collection | SCALAR
+-# block_collection  ::= block_sequence | block_mapping
+-# flow_collection   ::= flow_sequence | flow_mapping
+-# block_sequence    ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)*
+-#                                                                   BLOCK-END
+-# indentless_sequence   ::= (BLOCK-ENTRY block_node?)+
+-# block_mapping     ::= BLOCK-MAPPING_START
+-#                       ((KEY block_node_or_indentless_sequence?)?
+-#                       (VALUE block_node_or_indentless_sequence?)?)*
+-#                       BLOCK-END
+-# flow_sequence     ::= FLOW-SEQUENCE-START
+-#                       (flow_sequence_entry FLOW-ENTRY)*
+-#                       flow_sequence_entry?
+-#                       FLOW-SEQUENCE-END
+-# flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-# flow_mapping      ::= FLOW-MAPPING-START
+-#                       (flow_mapping_entry FLOW-ENTRY)*
+-#                       flow_mapping_entry?
+-#                       FLOW-MAPPING-END
+-# flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-#
+-# FIRST sets:
+-#
+-# stream: { STREAM-START }
+-# explicit_document: { DIRECTIVE DOCUMENT-START }
+-# implicit_document: FIRST(block_node)
+-# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START
+-#                  BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
+-# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
+-# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START
+-#                               FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
+-# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
+-# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
+-# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
+-# block_sequence: { BLOCK-SEQUENCE-START }
+-# block_mapping: { BLOCK-MAPPING-START }
+-# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR
+-#               BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START
+-#               FLOW-MAPPING-START BLOCK-ENTRY }
+-# indentless_sequence: { ENTRY }
+-# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
+-# flow_sequence: { FLOW-SEQUENCE-START }
+-# flow_mapping: { FLOW-MAPPING-START }
+-# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START
+-#                                                    FLOW-MAPPING-START KEY }
+-# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START
+-#                                                    FLOW-MAPPING-START KEY }
+-
+-# need to have full path with import, as pkg_resources tries to load parser.py in __init__.py
+-# only to not do anything with the package afterwards
+-# and for Jython too
+-
+-
+-from .error import MarkedYAMLError
+-from .tokens import *  # NOQA
+-from .events import *  # NOQA
+-from .scanner import Scanner, RoundTripScanner, ScannerError  # NOQA
+-from .compat import utf8, nprint, nprintf  # NOQA
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List  # NOQA
+-
+-__all__ = ['Parser', 'RoundTripParser', 'ParserError']
+-
+-
+-class ParserError(MarkedYAMLError):
+-    pass
+-
+-
+-class Parser(object):
+-    # Since writing a recursive-descendant parser is a straightforward task, we
+-    # do not give many comments here.
+-
+-    DEFAULT_TAGS = {u'!': u'!', u'!!': u'tag:yaml.org,2002:'}
+-
+-    def __init__(self, loader):
+-        # type: (Any) -> None
+-        self.loader = loader
+-        if self.loader is not None and getattr(self.loader, '_parser', None) is None:
+-            self.loader._parser = self
+-        self.reset_parser()
+-
+-    def reset_parser(self):
+-        # type: () -> None
+-        # Reset the state attributes (to clear self-references)
+-        self.current_event = None
+-        self.tag_handles = {}  # type: Dict[Any, Any]
+-        self.states = []  # type: List[Any]
+-        self.marks = []  # type: List[Any]
+-        self.state = self.parse_stream_start  # type: Any
+-
+-    def dispose(self):
+-        # type: () -> None
+-        self.reset_parser()
+-
+-    @property
+-    def scanner(self):
+-        # type: () -> Any
+-        if hasattr(self.loader, 'typ'):
+-            return self.loader.scanner
+-        return self.loader._scanner
+-
+-    @property
+-    def resolver(self):
+-        # type: () -> Any
+-        if hasattr(self.loader, 'typ'):
+-            return self.loader.resolver
+-        return self.loader._resolver
+-
+-    def check_event(self, *choices):
+-        # type: (Any) -> bool
+-        # Check the type of the next event.
+-        if self.current_event is None:
+-            if self.state:
+-                self.current_event = self.state()
+-        if self.current_event is not None:
+-            if not choices:
+-                return True
+-            for choice in choices:
+-                if isinstance(self.current_event, choice):
+-                    return True
+-        return False
+-
+-    def peek_event(self):
+-        # type: () -> Any
+-        # Get the next event.
+-        if self.current_event is None:
+-            if self.state:
+-                self.current_event = self.state()
+-        return self.current_event
+-
+-    def get_event(self):
+-        # type: () -> Any
+-        # Get the next event and proceed further.
+-        if self.current_event is None:
+-            if self.state:
+-                self.current_event = self.state()
+-        value = self.current_event
+-        self.current_event = None
+-        return value
+-
+-    # stream    ::= STREAM-START implicit_document? explicit_document*
+-    #                                                               STREAM-END
+-    # implicit_document ::= block_node DOCUMENT-END*
+-    # explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+-
+-    def parse_stream_start(self):
+-        # type: () -> Any
+-        # Parse the stream start.
+-        token = self.scanner.get_token()
+-        token.move_comment(self.scanner.peek_token())
+-        event = StreamStartEvent(token.start_mark, token.end_mark, encoding=token.encoding)
+-
+-        # Prepare the next state.
+-        self.state = self.parse_implicit_document_start
+-
+-        return event
+-
+-    def parse_implicit_document_start(self):
+-        # type: () -> Any
+-        # Parse an implicit document.
+-        if not self.scanner.check_token(DirectiveToken, DocumentStartToken, StreamEndToken):
+-            self.tag_handles = self.DEFAULT_TAGS
+-            token = self.scanner.peek_token()
+-            start_mark = end_mark = token.start_mark
+-            event = DocumentStartEvent(start_mark, end_mark, explicit=False)
+-
+-            # Prepare the next state.
+-            self.states.append(self.parse_document_end)
+-            self.state = self.parse_block_node
+-
+-            return event
+-
+-        else:
+-            return self.parse_document_start()
+-
+-    def parse_document_start(self):
+-        # type: () -> Any
+-        # Parse any extra document end indicators.
+-        while self.scanner.check_token(DocumentEndToken):
+-            self.scanner.get_token()
+-        # Parse an explicit document.
+-        if not self.scanner.check_token(StreamEndToken):
+-            token = self.scanner.peek_token()
+-            start_mark = token.start_mark
+-            version, tags = self.process_directives()
+-            if not self.scanner.check_token(DocumentStartToken):
+-                raise ParserError(
+-                    None,
+-                    None,
+-                    "expected '<document start>', but found %r" % self.scanner.peek_token().id,
+-                    self.scanner.peek_token().start_mark,
+-                )
+-            token = self.scanner.get_token()
+-            end_mark = token.end_mark
+-            # if self.loader is not None and \
+-            #    end_mark.line != self.scanner.peek_token().start_mark.line:
+-            #     self.loader.scalar_after_indicator = False
+-            event = DocumentStartEvent(
+-                start_mark, end_mark, explicit=True, version=version, tags=tags
+-            )  # type: Any
+-            self.states.append(self.parse_document_end)
+-            self.state = self.parse_document_content
+-        else:
+-            # Parse the end of the stream.
+-            token = self.scanner.get_token()
+-            event = StreamEndEvent(token.start_mark, token.end_mark, comment=token.comment)
+-            assert not self.states
+-            assert not self.marks
+-            self.state = None
+-        return event
+-
+-    def parse_document_end(self):
+-        # type: () -> Any
+-        # Parse the document end.
+-        token = self.scanner.peek_token()
+-        start_mark = end_mark = token.start_mark
+-        explicit = False
+-        if self.scanner.check_token(DocumentEndToken):
+-            token = self.scanner.get_token()
+-            end_mark = token.end_mark
+-            explicit = True
+-        event = DocumentEndEvent(start_mark, end_mark, explicit=explicit)
+-
+-        # Prepare the next state.
+-        if self.resolver.processing_version == (1, 1):
+-            self.state = self.parse_document_start
+-        else:
+-            self.state = self.parse_implicit_document_start
+-
+-        return event
+-
+-    def parse_document_content(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(
+-            DirectiveToken, DocumentStartToken, DocumentEndToken, StreamEndToken
+-        ):
+-            event = self.process_empty_scalar(self.scanner.peek_token().start_mark)
+-            self.state = self.states.pop()
+-            return event
+-        else:
+-            return self.parse_block_node()
+-
+-    def process_directives(self):
+-        # type: () -> Any
+-        yaml_version = None
+-        self.tag_handles = {}
+-        while self.scanner.check_token(DirectiveToken):
+-            token = self.scanner.get_token()
+-            if token.name == u'YAML':
+-                if yaml_version is not None:
+-                    raise ParserError(
+-                        None, None, 'found duplicate YAML directive', token.start_mark
+-                    )
+-                major, minor = token.value
+-                if major != 1:
+-                    raise ParserError(
+-                        None,
+-                        None,
+-                        'found incompatible YAML document (version 1.* is ' 'required)',
+-                        token.start_mark,
+-                    )
+-                yaml_version = token.value
+-            elif token.name == u'TAG':
+-                handle, prefix = token.value
+-                if handle in self.tag_handles:
+-                    raise ParserError(
+-                        None, None, 'duplicate tag handle %r' % utf8(handle), token.start_mark
+-                    )
+-                self.tag_handles[handle] = prefix
+-        if bool(self.tag_handles):
+-            value = yaml_version, self.tag_handles.copy()  # type: Any
+-        else:
+-            value = yaml_version, None
+-        if self.loader is not None and hasattr(self.loader, 'tags'):
+-            self.loader.version = yaml_version
+-            if self.loader.tags is None:
+-                self.loader.tags = {}
+-            for k in self.tag_handles:
+-                self.loader.tags[k] = self.tag_handles[k]
+-        for key in self.DEFAULT_TAGS:
+-            if key not in self.tag_handles:
+-                self.tag_handles[key] = self.DEFAULT_TAGS[key]
+-        return value
+-
+-    # block_node_or_indentless_sequence ::= ALIAS
+-    #               | properties (block_content | indentless_block_sequence)?
+-    #               | block_content
+-    #               | indentless_block_sequence
+-    # block_node    ::= ALIAS
+-    #                   | properties block_content?
+-    #                   | block_content
+-    # flow_node     ::= ALIAS
+-    #                   | properties flow_content?
+-    #                   | flow_content
+-    # properties    ::= TAG ANCHOR? | ANCHOR TAG?
+-    # block_content     ::= block_collection | flow_collection | SCALAR
+-    # flow_content      ::= flow_collection | SCALAR
+-    # block_collection  ::= block_sequence | block_mapping
+-    # flow_collection   ::= flow_sequence | flow_mapping
+-
+-    def parse_block_node(self):
+-        # type: () -> Any
+-        return self.parse_node(block=True)
+-
+-    def parse_flow_node(self):
+-        # type: () -> Any
+-        return self.parse_node()
+-
+-    def parse_block_node_or_indentless_sequence(self):
+-        # type: () -> Any
+-        return self.parse_node(block=True, indentless_sequence=True)
+-
+-    def transform_tag(self, handle, suffix):
+-        # type: (Any, Any) -> Any
+-        return self.tag_handles[handle] + suffix
+-
+-    def parse_node(self, block=False, indentless_sequence=False):
+-        # type: (bool, bool) -> Any
+-        if self.scanner.check_token(AliasToken):
+-            token = self.scanner.get_token()
+-            event = AliasEvent(token.value, token.start_mark, token.end_mark)  # type: Any
+-            self.state = self.states.pop()
+-            return event
+-
+-        anchor = None
+-        tag = None
+-        start_mark = end_mark = tag_mark = None
+-        if self.scanner.check_token(AnchorToken):
+-            token = self.scanner.get_token()
+-            start_mark = token.start_mark
+-            end_mark = token.end_mark
+-            anchor = token.value
+-            if self.scanner.check_token(TagToken):
+-                token = self.scanner.get_token()
+-                tag_mark = token.start_mark
+-                end_mark = token.end_mark
+-                tag = token.value
+-        elif self.scanner.check_token(TagToken):
+-            token = self.scanner.get_token()
+-            start_mark = tag_mark = token.start_mark
+-            end_mark = token.end_mark
+-            tag = token.value
+-            if self.scanner.check_token(AnchorToken):
+-                token = self.scanner.get_token()
+-                start_mark = tag_mark = token.start_mark
+-                end_mark = token.end_mark
+-                anchor = token.value
+-        if tag is not None:
+-            handle, suffix = tag
+-            if handle is not None:
+-                if handle not in self.tag_handles:
+-                    raise ParserError(
+-                        'while parsing a node',
+-                        start_mark,
+-                        'found undefined tag handle %r' % utf8(handle),
+-                        tag_mark,
+-                    )
+-                tag = self.transform_tag(handle, suffix)
+-            else:
+-                tag = suffix
+-        # if tag == u'!':
+-        #     raise ParserError("while parsing a node", start_mark,
+-        #             "found non-specific tag '!'", tag_mark,
+-        #      "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag'
+-        #     and share your opinion.")
+-        if start_mark is None:
+-            start_mark = end_mark = self.scanner.peek_token().start_mark
+-        event = None
+-        implicit = tag is None or tag == u'!'
+-        if indentless_sequence and self.scanner.check_token(BlockEntryToken):
+-            comment = None
+-            pt = self.scanner.peek_token()
+-            if pt.comment and pt.comment[0]:
+-                comment = [pt.comment[0], []]
+-                pt.comment[0] = None
+-            end_mark = self.scanner.peek_token().end_mark
+-            event = SequenceStartEvent(
+-                anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment
+-            )
+-            self.state = self.parse_indentless_sequence_entry
+-            return event
+-
+-        if self.scanner.check_token(ScalarToken):
+-            token = self.scanner.get_token()
+-            # self.scanner.peek_token_same_line_comment(token)
+-            end_mark = token.end_mark
+-            if (token.plain and tag is None) or tag == u'!':
+-                implicit = (True, False)
+-            elif tag is None:
+-                implicit = (False, True)
+-            else:
+-                implicit = (False, False)
+-            # nprint('se', token.value, token.comment)
+-            event = ScalarEvent(
+-                anchor,
+-                tag,
+-                implicit,
+-                token.value,
+-                start_mark,
+-                end_mark,
+-                style=token.style,
+-                comment=token.comment,
+-            )
+-            self.state = self.states.pop()
+-        elif self.scanner.check_token(FlowSequenceStartToken):
+-            pt = self.scanner.peek_token()
+-            end_mark = pt.end_mark
+-            event = SequenceStartEvent(
+-                anchor,
+-                tag,
+-                implicit,
+-                start_mark,
+-                end_mark,
+-                flow_style=True,
+-                comment=pt.comment,
+-            )
+-            self.state = self.parse_flow_sequence_first_entry
+-        elif self.scanner.check_token(FlowMappingStartToken):
+-            pt = self.scanner.peek_token()
+-            end_mark = pt.end_mark
+-            event = MappingStartEvent(
+-                anchor,
+-                tag,
+-                implicit,
+-                start_mark,
+-                end_mark,
+-                flow_style=True,
+-                comment=pt.comment,
+-            )
+-            self.state = self.parse_flow_mapping_first_key
+-        elif block and self.scanner.check_token(BlockSequenceStartToken):
+-            end_mark = self.scanner.peek_token().start_mark
+-            # should inserting the comment be dependent on the
+-            # indentation?
+-            pt = self.scanner.peek_token()
+-            comment = pt.comment
+-            # nprint('pt0', type(pt))
+-            if comment is None or comment[1] is None:
+-                comment = pt.split_comment()
+-            # nprint('pt1', comment)
+-            event = SequenceStartEvent(
+-                anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment
+-            )
+-            self.state = self.parse_block_sequence_first_entry
+-        elif block and self.scanner.check_token(BlockMappingStartToken):
+-            end_mark = self.scanner.peek_token().start_mark
+-            comment = self.scanner.peek_token().comment
+-            event = MappingStartEvent(
+-                anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment
+-            )
+-            self.state = self.parse_block_mapping_first_key
+-        elif anchor is not None or tag is not None:
+-            # Empty scalars are allowed even if a tag or an anchor is
+-            # specified.
+-            event = ScalarEvent(anchor, tag, (implicit, False), "", start_mark, end_mark)
+-            self.state = self.states.pop()
+-        else:
+-            if block:
+-                node = 'block'
+-            else:
+-                node = 'flow'
+-            token = self.scanner.peek_token()
+-            raise ParserError(
+-                'while parsing a %s node' % node,
+-                start_mark,
+-                'expected the node content, but found %r' % token.id,
+-                token.start_mark,
+-            )
+-        return event
+-
+-    # block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)*
+-    #                                                               BLOCK-END
+-
+-    def parse_block_sequence_first_entry(self):
+-        # type: () -> Any
+-        token = self.scanner.get_token()
+-        # move any comment from start token
+-        # token.move_comment(self.scanner.peek_token())
+-        self.marks.append(token.start_mark)
+-        return self.parse_block_sequence_entry()
+-
+-    def parse_block_sequence_entry(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(BlockEntryToken):
+-            token = self.scanner.get_token()
+-            token.move_comment(self.scanner.peek_token())
+-            if not self.scanner.check_token(BlockEntryToken, BlockEndToken):
+-                self.states.append(self.parse_block_sequence_entry)
+-                return self.parse_block_node()
+-            else:
+-                self.state = self.parse_block_sequence_entry
+-                return self.process_empty_scalar(token.end_mark)
+-        if not self.scanner.check_token(BlockEndToken):
+-            token = self.scanner.peek_token()
+-            raise ParserError(
+-                'while parsing a block collection',
+-                self.marks[-1],
+-                'expected <block end>, but found %r' % token.id,
+-                token.start_mark,
+-            )
+-        token = self.scanner.get_token()  # BlockEndToken
+-        event = SequenceEndEvent(token.start_mark, token.end_mark, comment=token.comment)
+-        self.state = self.states.pop()
+-        self.marks.pop()
+-        return event
+-
+-    # indentless_sequence ::= (BLOCK-ENTRY block_node?)+
+-
+-    # indentless_sequence?
+-    # sequence:
+-    # - entry
+-    #  - nested
+-
+-    def parse_indentless_sequence_entry(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(BlockEntryToken):
+-            token = self.scanner.get_token()
+-            token.move_comment(self.scanner.peek_token())
+-            if not self.scanner.check_token(
+-                BlockEntryToken, KeyToken, ValueToken, BlockEndToken
+-            ):
+-                self.states.append(self.parse_indentless_sequence_entry)
+-                return self.parse_block_node()
+-            else:
+-                self.state = self.parse_indentless_sequence_entry
+-                return self.process_empty_scalar(token.end_mark)
+-        token = self.scanner.peek_token()
+-        event = SequenceEndEvent(token.start_mark, token.start_mark, comment=token.comment)
+-        self.state = self.states.pop()
+-        return event
+-
+-    # block_mapping     ::= BLOCK-MAPPING_START
+-    #                       ((KEY block_node_or_indentless_sequence?)?
+-    #                       (VALUE block_node_or_indentless_sequence?)?)*
+-    #                       BLOCK-END
+-
+-    def parse_block_mapping_first_key(self):
+-        # type: () -> Any
+-        token = self.scanner.get_token()
+-        self.marks.append(token.start_mark)
+-        return self.parse_block_mapping_key()
+-
+-    def parse_block_mapping_key(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(KeyToken):
+-            token = self.scanner.get_token()
+-            token.move_comment(self.scanner.peek_token())
+-            if not self.scanner.check_token(KeyToken, ValueToken, BlockEndToken):
+-                self.states.append(self.parse_block_mapping_value)
+-                return self.parse_block_node_or_indentless_sequence()
+-            else:
+-                self.state = self.parse_block_mapping_value
+-                return self.process_empty_scalar(token.end_mark)
+-        if self.resolver.processing_version > (1, 1) and self.scanner.check_token(ValueToken):
+-            self.state = self.parse_block_mapping_value
+-            return self.process_empty_scalar(self.scanner.peek_token().start_mark)
+-        if not self.scanner.check_token(BlockEndToken):
+-            token = self.scanner.peek_token()
+-            raise ParserError(
+-                'while parsing a block mapping',
+-                self.marks[-1],
+-                'expected <block end>, but found %r' % token.id,
+-                token.start_mark,
+-            )
+-        token = self.scanner.get_token()
+-        token.move_comment(self.scanner.peek_token())
+-        event = MappingEndEvent(token.start_mark, token.end_mark, comment=token.comment)
+-        self.state = self.states.pop()
+-        self.marks.pop()
+-        return event
+-
+-    def parse_block_mapping_value(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(ValueToken):
+-            token = self.scanner.get_token()
+-            # value token might have post comment move it to e.g. block
+-            if self.scanner.check_token(ValueToken):
+-                token.move_comment(self.scanner.peek_token())
+-            else:
+-                if not self.scanner.check_token(KeyToken):
+-                    token.move_comment(self.scanner.peek_token(), empty=True)
+-                # else: empty value for this key cannot move token.comment
+-            if not self.scanner.check_token(KeyToken, ValueToken, BlockEndToken):
+-                self.states.append(self.parse_block_mapping_key)
+-                return self.parse_block_node_or_indentless_sequence()
+-            else:
+-                self.state = self.parse_block_mapping_key
+-                comment = token.comment
+-                if comment is None:
+-                    token = self.scanner.peek_token()
+-                    comment = token.comment
+-                    if comment:
+-                        token._comment = [None, comment[1]]
+-                        comment = [comment[0], None]
+-                return self.process_empty_scalar(token.end_mark, comment=comment)
+-        else:
+-            self.state = self.parse_block_mapping_key
+-            token = self.scanner.peek_token()
+-            return self.process_empty_scalar(token.start_mark)
+-
+-    # flow_sequence     ::= FLOW-SEQUENCE-START
+-    #                       (flow_sequence_entry FLOW-ENTRY)*
+-    #                       flow_sequence_entry?
+-    #                       FLOW-SEQUENCE-END
+-    # flow_sequence_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-    #
+-    # Note that while production rules for both flow_sequence_entry and
+-    # flow_mapping_entry are equal, their interpretations are different.
+-    # For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
+-    # generate an inline mapping (set syntax).
+-
+-    def parse_flow_sequence_first_entry(self):
+-        # type: () -> Any
+-        token = self.scanner.get_token()
+-        self.marks.append(token.start_mark)
+-        return self.parse_flow_sequence_entry(first=True)
+-
+-    def parse_flow_sequence_entry(self, first=False):
+-        # type: (bool) -> Any
+-        if not self.scanner.check_token(FlowSequenceEndToken):
+-            if not first:
+-                if self.scanner.check_token(FlowEntryToken):
+-                    self.scanner.get_token()
+-                else:
+-                    token = self.scanner.peek_token()
+-                    raise ParserError(
+-                        'while parsing a flow sequence',
+-                        self.marks[-1],
+-                        "expected ',' or ']', but got %r" % token.id,
+-                        token.start_mark,
+-                    )
+-
+-            if self.scanner.check_token(KeyToken):
+-                token = self.scanner.peek_token()
+-                event = MappingStartEvent(
+-                    None, None, True, token.start_mark, token.end_mark, flow_style=True
+-                )  # type: Any
+-                self.state = self.parse_flow_sequence_entry_mapping_key
+-                return event
+-            elif not self.scanner.check_token(FlowSequenceEndToken):
+-                self.states.append(self.parse_flow_sequence_entry)
+-                return self.parse_flow_node()
+-        token = self.scanner.get_token()
+-        event = SequenceEndEvent(token.start_mark, token.end_mark, comment=token.comment)
+-        self.state = self.states.pop()
+-        self.marks.pop()
+-        return event
+-
+-    def parse_flow_sequence_entry_mapping_key(self):
+-        # type: () -> Any
+-        token = self.scanner.get_token()
+-        if not self.scanner.check_token(ValueToken, FlowEntryToken, FlowSequenceEndToken):
+-            self.states.append(self.parse_flow_sequence_entry_mapping_value)
+-            return self.parse_flow_node()
+-        else:
+-            self.state = self.parse_flow_sequence_entry_mapping_value
+-            return self.process_empty_scalar(token.end_mark)
+-
+-    def parse_flow_sequence_entry_mapping_value(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(ValueToken):
+-            token = self.scanner.get_token()
+-            if not self.scanner.check_token(FlowEntryToken, FlowSequenceEndToken):
+-                self.states.append(self.parse_flow_sequence_entry_mapping_end)
+-                return self.parse_flow_node()
+-            else:
+-                self.state = self.parse_flow_sequence_entry_mapping_end
+-                return self.process_empty_scalar(token.end_mark)
+-        else:
+-            self.state = self.parse_flow_sequence_entry_mapping_end
+-            token = self.scanner.peek_token()
+-            return self.process_empty_scalar(token.start_mark)
+-
+-    def parse_flow_sequence_entry_mapping_end(self):
+-        # type: () -> Any
+-        self.state = self.parse_flow_sequence_entry
+-        token = self.scanner.peek_token()
+-        return MappingEndEvent(token.start_mark, token.start_mark)
+-
+-    # flow_mapping  ::= FLOW-MAPPING-START
+-    #                   (flow_mapping_entry FLOW-ENTRY)*
+-    #                   flow_mapping_entry?
+-    #                   FLOW-MAPPING-END
+-    # flow_mapping_entry    ::= flow_node | KEY flow_node? (VALUE flow_node?)?
+-
+-    def parse_flow_mapping_first_key(self):
+-        # type: () -> Any
+-        token = self.scanner.get_token()
+-        self.marks.append(token.start_mark)
+-        return self.parse_flow_mapping_key(first=True)
+-
+-    def parse_flow_mapping_key(self, first=False):
+-        # type: (Any) -> Any
+-        if not self.scanner.check_token(FlowMappingEndToken):
+-            if not first:
+-                if self.scanner.check_token(FlowEntryToken):
+-                    self.scanner.get_token()
+-                else:
+-                    token = self.scanner.peek_token()
+-                    raise ParserError(
+-                        'while parsing a flow mapping',
+-                        self.marks[-1],
+-                        "expected ',' or '}', but got %r" % token.id,
+-                        token.start_mark,
+-                    )
+-            if self.scanner.check_token(KeyToken):
+-                token = self.scanner.get_token()
+-                if not self.scanner.check_token(
+-                    ValueToken, FlowEntryToken, FlowMappingEndToken
+-                ):
+-                    self.states.append(self.parse_flow_mapping_value)
+-                    return self.parse_flow_node()
+-                else:
+-                    self.state = self.parse_flow_mapping_value
+-                    return self.process_empty_scalar(token.end_mark)
+-            elif self.resolver.processing_version > (1, 1) and self.scanner.check_token(
+-                ValueToken
+-            ):
+-                self.state = self.parse_flow_mapping_value
+-                return self.process_empty_scalar(self.scanner.peek_token().end_mark)
+-            elif not self.scanner.check_token(FlowMappingEndToken):
+-                self.states.append(self.parse_flow_mapping_empty_value)
+-                return self.parse_flow_node()
+-        token = self.scanner.get_token()
+-        event = MappingEndEvent(token.start_mark, token.end_mark, comment=token.comment)
+-        self.state = self.states.pop()
+-        self.marks.pop()
+-        return event
+-
+-    def parse_flow_mapping_value(self):
+-        # type: () -> Any
+-        if self.scanner.check_token(ValueToken):
+-            token = self.scanner.get_token()
+-            if not self.scanner.check_token(FlowEntryToken, FlowMappingEndToken):
+-                self.states.append(self.parse_flow_mapping_key)
+-                return self.parse_flow_node()
+-            else:
+-                self.state = self.parse_flow_mapping_key
+-                return self.process_empty_scalar(token.end_mark)
+-        else:
+-            self.state = self.parse_flow_mapping_key
+-            token = self.scanner.peek_token()
+-            return self.process_empty_scalar(token.start_mark)
+-
+-    def parse_flow_mapping_empty_value(self):
+-        # type: () -> Any
+-        self.state = self.parse_flow_mapping_key
+-        return self.process_empty_scalar(self.scanner.peek_token().start_mark)
+-
+-    def process_empty_scalar(self, mark, comment=None):
+-        # type: (Any, Any) -> Any
+-        return ScalarEvent(None, None, (True, False), "", mark, mark, comment=comment)
+-
+-
+-class RoundTripParser(Parser):
+-    """roundtrip is a safe loader, that wants to see the unmangled tag"""
+-
+-    def transform_tag(self, handle, suffix):
+-        # type: (Any, Any) -> Any
+-        # return self.tag_handles[handle]+suffix
+-        if handle == '!!' and suffix in (
+-            u'null',
+-            u'bool',
+-            u'int',
+-            u'float',
+-            u'binary',
+-            u'timestamp',
+-            u'omap',
+-            u'pairs',
+-            u'set',
+-            u'str',
+-            u'seq',
+-            u'map',
+-        ):
+-            return Parser.transform_tag(self, handle, suffix)
+-        return handle + suffix
+diff --git a/dynaconf/vendor_src/ruamel/yaml/py.typed b/dynaconf/vendor_src/ruamel/yaml/py.typed
+deleted file mode 100644
+index e69de29..0000000
+diff --git a/dynaconf/vendor_src/ruamel/yaml/reader.py b/dynaconf/vendor_src/ruamel/yaml/reader.py
+deleted file mode 100644
+index 52ec9a9..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/reader.py
++++ /dev/null
+@@ -1,311 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-# This module contains abstractions for the input stream. You don't have to
+-# looks further, there are no pretty code.
+-#
+-# We define two classes here.
+-#
+-#   Mark(source, line, column)
+-# It's just a record and its only use is producing nice error messages.
+-# Parser does not use it for any other purposes.
+-#
+-#   Reader(source, data)
+-# Reader determines the encoding of `data` and converts it to unicode.
+-# Reader provides the following methods and attributes:
+-#   reader.peek(length=1) - return the next `length` characters
+-#   reader.forward(length=1) - move the current position to `length`
+-#      characters.
+-#   reader.index - the number of the current character.
+-#   reader.line, stream.column - the line and the column of the current
+-#      character.
+-
+-import codecs
+-
+-from .error import YAMLError, FileMark, StringMark, YAMLStreamError
+-from .compat import text_type, binary_type, PY3, UNICODE_SIZE
+-from .util import RegExp
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Union, Text, Tuple, Optional  # NOQA
+-#    from .compat import StreamTextType  # NOQA
+-
+-__all__ = ['Reader', 'ReaderError']
+-
+-
+-class ReaderError(YAMLError):
+-    def __init__(self, name, position, character, encoding, reason):
+-        # type: (Any, Any, Any, Any, Any) -> None
+-        self.name = name
+-        self.character = character
+-        self.position = position
+-        self.encoding = encoding
+-        self.reason = reason
+-
+-    def __str__(self):
+-        # type: () -> str
+-        if isinstance(self.character, binary_type):
+-            return "'%s' codec can't decode byte #x%02x: %s\n" '  in "%s", position %d' % (
+-                self.encoding,
+-                ord(self.character),
+-                self.reason,
+-                self.name,
+-                self.position,
+-            )
+-        else:
+-            return 'unacceptable character #x%04x: %s\n' '  in "%s", position %d' % (
+-                self.character,
+-                self.reason,
+-                self.name,
+-                self.position,
+-            )
+-
+-
+-class Reader(object):
+-    # Reader:
+-    # - determines the data encoding and converts it to a unicode string,
+-    # - checks if characters are in allowed range,
+-    # - adds '\0' to the end.
+-
+-    # Reader accepts
+-    #  - a `str` object (PY2) / a `bytes` object (PY3),
+-    #  - a `unicode` object (PY2) / a `str` object (PY3),
+-    #  - a file-like object with its `read` method returning `str`,
+-    #  - a file-like object with its `read` method returning `unicode`.
+-
+-    # Yeah, it's ugly and slow.
+-
+-    def __init__(self, stream, loader=None):
+-        # type: (Any, Any) -> None
+-        self.loader = loader
+-        if self.loader is not None and getattr(self.loader, '_reader', None) is None:
+-            self.loader._reader = self
+-        self.reset_reader()
+-        self.stream = stream  # type: Any  # as .read is called
+-
+-    def reset_reader(self):
+-        # type: () -> None
+-        self.name = None  # type: Any
+-        self.stream_pointer = 0
+-        self.eof = True
+-        self.buffer = ""
+-        self.pointer = 0
+-        self.raw_buffer = None  # type: Any
+-        self.raw_decode = None
+-        self.encoding = None  # type: Optional[Text]
+-        self.index = 0
+-        self.line = 0
+-        self.column = 0
+-
+-    @property
+-    def stream(self):
+-        # type: () -> Any
+-        try:
+-            return self._stream
+-        except AttributeError:
+-            raise YAMLStreamError('input stream needs to specified')
+-
+-    @stream.setter
+-    def stream(self, val):
+-        # type: (Any) -> None
+-        if val is None:
+-            return
+-        self._stream = None
+-        if isinstance(val, text_type):
+-            self.name = '<unicode string>'
+-            self.check_printable(val)
+-            self.buffer = val + u'\0'  # type: ignore
+-        elif isinstance(val, binary_type):
+-            self.name = '<byte string>'
+-            self.raw_buffer = val
+-            self.determine_encoding()
+-        else:
+-            if not hasattr(val, 'read'):
+-                raise YAMLStreamError('stream argument needs to have a read() method')
+-            self._stream = val
+-            self.name = getattr(self.stream, 'name', '<file>')
+-            self.eof = False
+-            self.raw_buffer = None
+-            self.determine_encoding()
+-
+-    def peek(self, index=0):
+-        # type: (int) -> Text
+-        try:
+-            return self.buffer[self.pointer + index]
+-        except IndexError:
+-            self.update(index + 1)
+-            return self.buffer[self.pointer + index]
+-
+-    def prefix(self, length=1):
+-        # type: (int) -> Any
+-        if self.pointer + length >= len(self.buffer):
+-            self.update(length)
+-        return self.buffer[self.pointer : self.pointer + length]
+-
+-    def forward_1_1(self, length=1):
+-        # type: (int) -> None
+-        if self.pointer + length + 1 >= len(self.buffer):
+-            self.update(length + 1)
+-        while length != 0:
+-            ch = self.buffer[self.pointer]
+-            self.pointer += 1
+-            self.index += 1
+-            if ch in u'\n\x85\u2028\u2029' or (
+-                ch == u'\r' and self.buffer[self.pointer] != u'\n'
+-            ):
+-                self.line += 1
+-                self.column = 0
+-            elif ch != u'\uFEFF':
+-                self.column += 1
+-            length -= 1
+-
+-    def forward(self, length=1):
+-        # type: (int) -> None
+-        if self.pointer + length + 1 >= len(self.buffer):
+-            self.update(length + 1)
+-        while length != 0:
+-            ch = self.buffer[self.pointer]
+-            self.pointer += 1
+-            self.index += 1
+-            if ch == u'\n' or (ch == u'\r' and self.buffer[self.pointer] != u'\n'):
+-                self.line += 1
+-                self.column = 0
+-            elif ch != u'\uFEFF':
+-                self.column += 1
+-            length -= 1
+-
+-    def get_mark(self):
+-        # type: () -> Any
+-        if self.stream is None:
+-            return StringMark(
+-                self.name, self.index, self.line, self.column, self.buffer, self.pointer
+-            )
+-        else:
+-            return FileMark(self.name, self.index, self.line, self.column)
+-
+-    def determine_encoding(self):
+-        # type: () -> None
+-        while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2):
+-            self.update_raw()
+-        if isinstance(self.raw_buffer, binary_type):
+-            if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
+-                self.raw_decode = codecs.utf_16_le_decode  # type: ignore
+-                self.encoding = 'utf-16-le'
+-            elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
+-                self.raw_decode = codecs.utf_16_be_decode  # type: ignore
+-                self.encoding = 'utf-16-be'
+-            else:
+-                self.raw_decode = codecs.utf_8_decode  # type: ignore
+-                self.encoding = 'utf-8'
+-        self.update(1)
+-
+-    if UNICODE_SIZE == 2:
+-        NON_PRINTABLE = RegExp(
+-            u'[^\x09\x0A\x0D\x20-\x7E\x85' u'\xA0-\uD7FF' u'\uE000-\uFFFD' u']'
+-        )
+-    else:
+-        NON_PRINTABLE = RegExp(
+-            u'[^\x09\x0A\x0D\x20-\x7E\x85'
+-            u'\xA0-\uD7FF'
+-            u'\uE000-\uFFFD'
+-            u'\U00010000-\U0010FFFF'
+-            u']'
+-        )
+-
+-    _printable_ascii = ('\x09\x0A\x0D' + "".join(map(chr, range(0x20, 0x7F)))).encode('ascii')
+-
+-    @classmethod
+-    def _get_non_printable_ascii(cls, data):  # type: ignore
+-        # type: (Text, bytes) -> Optional[Tuple[int, Text]]
+-        ascii_bytes = data.encode('ascii')
+-        non_printables = ascii_bytes.translate(None, cls._printable_ascii)  # type: ignore
+-        if not non_printables:
+-            return None
+-        non_printable = non_printables[:1]
+-        return ascii_bytes.index(non_printable), non_printable.decode('ascii')
+-
+-    @classmethod
+-    def _get_non_printable_regex(cls, data):
+-        # type: (Text) -> Optional[Tuple[int, Text]]
+-        match = cls.NON_PRINTABLE.search(data)
+-        if not bool(match):
+-            return None
+-        return match.start(), match.group()
+-
+-    @classmethod
+-    def _get_non_printable(cls, data):
+-        # type: (Text) -> Optional[Tuple[int, Text]]
+-        try:
+-            return cls._get_non_printable_ascii(data)  # type: ignore
+-        except UnicodeEncodeError:
+-            return cls._get_non_printable_regex(data)
+-
+-    def check_printable(self, data):
+-        # type: (Any) -> None
+-        non_printable_match = self._get_non_printable(data)
+-        if non_printable_match is not None:
+-            start, character = non_printable_match
+-            position = self.index + (len(self.buffer) - self.pointer) + start
+-            raise ReaderError(
+-                self.name,
+-                position,
+-                ord(character),
+-                'unicode',
+-                'special characters are not allowed',
+-            )
+-
+-    def update(self, length):
+-        # type: (int) -> None
+-        if self.raw_buffer is None:
+-            return
+-        self.buffer = self.buffer[self.pointer :]
+-        self.pointer = 0
+-        while len(self.buffer) < length:
+-            if not self.eof:
+-                self.update_raw()
+-            if self.raw_decode is not None:
+-                try:
+-                    data, converted = self.raw_decode(self.raw_buffer, 'strict', self.eof)
+-                except UnicodeDecodeError as exc:
+-                    if PY3:
+-                        character = self.raw_buffer[exc.start]
+-                    else:
+-                        character = exc.object[exc.start]
+-                    if self.stream is not None:
+-                        position = self.stream_pointer - len(self.raw_buffer) + exc.start
+-                    elif self.stream is not None:
+-                        position = self.stream_pointer - len(self.raw_buffer) + exc.start
+-                    else:
+-                        position = exc.start
+-                    raise ReaderError(self.name, position, character, exc.encoding, exc.reason)
+-            else:
+-                data = self.raw_buffer
+-                converted = len(data)
+-            self.check_printable(data)
+-            self.buffer += data
+-            self.raw_buffer = self.raw_buffer[converted:]
+-            if self.eof:
+-                self.buffer += '\0'
+-                self.raw_buffer = None
+-                break
+-
+-    def update_raw(self, size=None):
+-        # type: (Optional[int]) -> None
+-        if size is None:
+-            size = 4096 if PY3 else 1024
+-        data = self.stream.read(size)
+-        if self.raw_buffer is None:
+-            self.raw_buffer = data
+-        else:
+-            self.raw_buffer += data
+-        self.stream_pointer += len(data)
+-        if not data:
+-            self.eof = True
+-
+-
+-# try:
+-#     import psyco
+-#     psyco.bind(Reader)
+-# except ImportError:
+-#     pass
+diff --git a/dynaconf/vendor_src/ruamel/yaml/representer.py b/dynaconf/vendor_src/ruamel/yaml/representer.py
+deleted file mode 100644
+index 985c9b2..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/representer.py
++++ /dev/null
+@@ -1,1283 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division
+-
+-
+-from .error import *  # NOQA
+-from .nodes import *  # NOQA
+-from .compat import text_type, binary_type, to_unicode, PY2, PY3
+-from .compat import ordereddict  # type: ignore
+-from .compat import nprint, nprintf  # NOQA
+-from .scalarstring import (
+-    LiteralScalarString,
+-    FoldedScalarString,
+-    SingleQuotedScalarString,
+-    DoubleQuotedScalarString,
+-    PlainScalarString,
+-)
+-from .scalarint import ScalarInt, BinaryInt, OctalInt, HexInt, HexCapsInt
+-from .scalarfloat import ScalarFloat
+-from .scalarbool import ScalarBoolean
+-from .timestamp import TimeStamp
+-
+-import datetime
+-import sys
+-import types
+-
+-if PY3:
+-    import copyreg
+-    import base64
+-else:
+-    import copy_reg as copyreg  # type: ignore
+-
+-if False:  # MYPY
+-    from typing import Dict, List, Any, Union, Text, Optional  # NOQA
+-
+-# fmt: off
+-__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
+-           'RepresenterError', 'RoundTripRepresenter']
+-# fmt: on
+-
+-
+-class RepresenterError(YAMLError):
+-    pass
+-
+-
+-if PY2:
+-
+-    def get_classobj_bases(cls):
+-        # type: (Any) -> Any
+-        bases = [cls]
+-        for base in cls.__bases__:
+-            bases.extend(get_classobj_bases(base))
+-        return bases
+-
+-
+-class BaseRepresenter(object):
+-
+-    yaml_representers = {}  # type: Dict[Any, Any]
+-    yaml_multi_representers = {}  # type: Dict[Any, Any]
+-
+-    def __init__(self, default_style=None, default_flow_style=None, dumper=None):
+-        # type: (Any, Any, Any, Any) -> None
+-        self.dumper = dumper
+-        if self.dumper is not None:
+-            self.dumper._representer = self
+-        self.default_style = default_style
+-        self.default_flow_style = default_flow_style
+-        self.represented_objects = {}  # type: Dict[Any, Any]
+-        self.object_keeper = []  # type: List[Any]
+-        self.alias_key = None  # type: Optional[int]
+-        self.sort_base_mapping_type_on_output = True
+-
+-    @property
+-    def serializer(self):
+-        # type: () -> Any
+-        try:
+-            if hasattr(self.dumper, 'typ'):
+-                return self.dumper.serializer
+-            return self.dumper._serializer
+-        except AttributeError:
+-            return self  # cyaml
+-
+-    def represent(self, data):
+-        # type: (Any) -> None
+-        node = self.represent_data(data)
+-        self.serializer.serialize(node)
+-        self.represented_objects = {}
+-        self.object_keeper = []
+-        self.alias_key = None
+-
+-    def represent_data(self, data):
+-        # type: (Any) -> Any
+-        if self.ignore_aliases(data):
+-            self.alias_key = None
+-        else:
+-            self.alias_key = id(data)
+-        if self.alias_key is not None:
+-            if self.alias_key in self.represented_objects:
+-                node = self.represented_objects[self.alias_key]
+-                # if node is None:
+-                #     raise RepresenterError(
+-                #          "recursive objects are not allowed: %r" % data)
+-                return node
+-            # self.represented_objects[alias_key] = None
+-            self.object_keeper.append(data)
+-        data_types = type(data).__mro__
+-        if PY2:
+-            # if type(data) is types.InstanceType:
+-            if isinstance(data, types.InstanceType):
+-                data_types = get_classobj_bases(data.__class__) + list(data_types)
+-        if data_types[0] in self.yaml_representers:
+-            node = self.yaml_representers[data_types[0]](self, data)
+-        else:
+-            for data_type in data_types:
+-                if data_type in self.yaml_multi_representers:
+-                    node = self.yaml_multi_representers[data_type](self, data)
+-                    break
+-            else:
+-                if None in self.yaml_multi_representers:
+-                    node = self.yaml_multi_representers[None](self, data)
+-                elif None in self.yaml_representers:
+-                    node = self.yaml_representers[None](self, data)
+-                else:
+-                    node = ScalarNode(None, text_type(data))
+-        # if alias_key is not None:
+-        #     self.represented_objects[alias_key] = node
+-        return node
+-
+-    def represent_key(self, data):
+-        # type: (Any) -> Any
+-        """
+-        David Fraser: Extract a method to represent keys in mappings, so that
+-        a subclass can choose not to quote them (for example)
+-        used in represent_mapping
+-        https://bitbucket.org/davidfraser/pyyaml/commits/d81df6eb95f20cac4a79eed95ae553b5c6f77b8c
+-        """
+-        return self.represent_data(data)
+-
+-    @classmethod
+-    def add_representer(cls, data_type, representer):
+-        # type: (Any, Any) -> None
+-        if 'yaml_representers' not in cls.__dict__:
+-            cls.yaml_representers = cls.yaml_representers.copy()
+-        cls.yaml_representers[data_type] = representer
+-
+-    @classmethod
+-    def add_multi_representer(cls, data_type, representer):
+-        # type: (Any, Any) -> None
+-        if 'yaml_multi_representers' not in cls.__dict__:
+-            cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
+-        cls.yaml_multi_representers[data_type] = representer
+-
+-    def represent_scalar(self, tag, value, style=None, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        if style is None:
+-            style = self.default_style
+-        comment = None
+-        if style and style[0] in '|>':
+-            comment = getattr(value, 'comment', None)
+-            if comment:
+-                comment = [None, [comment]]
+-        node = ScalarNode(tag, value, style=style, comment=comment, anchor=anchor)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        return node
+-
+-    def represent_sequence(self, tag, sequence, flow_style=None):
+-        # type: (Any, Any, Any) -> Any
+-        value = []  # type: List[Any]
+-        node = SequenceNode(tag, value, flow_style=flow_style)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        for item in sequence:
+-            node_item = self.represent_data(item)
+-            if not (isinstance(node_item, ScalarNode) and not node_item.style):
+-                best_style = False
+-            value.append(node_item)
+-        if flow_style is None:
+-            if self.default_flow_style is not None:
+-                node.flow_style = self.default_flow_style
+-            else:
+-                node.flow_style = best_style
+-        return node
+-
+-    def represent_omap(self, tag, omap, flow_style=None):
+-        # type: (Any, Any, Any) -> Any
+-        value = []  # type: List[Any]
+-        node = SequenceNode(tag, value, flow_style=flow_style)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        for item_key in omap:
+-            item_val = omap[item_key]
+-            node_item = self.represent_data({item_key: item_val})
+-            # if not (isinstance(node_item, ScalarNode) \
+-            #    and not node_item.style):
+-            #     best_style = False
+-            value.append(node_item)
+-        if flow_style is None:
+-            if self.default_flow_style is not None:
+-                node.flow_style = self.default_flow_style
+-            else:
+-                node.flow_style = best_style
+-        return node
+-
+-    def represent_mapping(self, tag, mapping, flow_style=None):
+-        # type: (Any, Any, Any) -> Any
+-        value = []  # type: List[Any]
+-        node = MappingNode(tag, value, flow_style=flow_style)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        if hasattr(mapping, 'items'):
+-            mapping = list(mapping.items())
+-            if self.sort_base_mapping_type_on_output:
+-                try:
+-                    mapping = sorted(mapping)
+-                except TypeError:
+-                    pass
+-        for item_key, item_value in mapping:
+-            node_key = self.represent_key(item_key)
+-            node_value = self.represent_data(item_value)
+-            if not (isinstance(node_key, ScalarNode) and not node_key.style):
+-                best_style = False
+-            if not (isinstance(node_value, ScalarNode) and not node_value.style):
+-                best_style = False
+-            value.append((node_key, node_value))
+-        if flow_style is None:
+-            if self.default_flow_style is not None:
+-                node.flow_style = self.default_flow_style
+-            else:
+-                node.flow_style = best_style
+-        return node
+-
+-    def ignore_aliases(self, data):
+-        # type: (Any) -> bool
+-        return False
+-
+-
+-class SafeRepresenter(BaseRepresenter):
+-    def ignore_aliases(self, data):
+-        # type: (Any) -> bool
+-        # https://docs.python.org/3/reference/expressions.html#parenthesized-forms :
+-        # "i.e. two occurrences of the empty tuple may or may not yield the same object"
+-        # so "data is ()" should not be used
+-        if data is None or (isinstance(data, tuple) and data == ()):
+-            return True
+-        if isinstance(data, (binary_type, text_type, bool, int, float)):
+-            return True
+-        return False
+-
+-    def represent_none(self, data):
+-        # type: (Any) -> Any
+-        return self.represent_scalar(u'tag:yaml.org,2002:null', u'null')
+-
+-    if PY3:
+-
+-        def represent_str(self, data):
+-            # type: (Any) -> Any
+-            return self.represent_scalar(u'tag:yaml.org,2002:str', data)
+-
+-        def represent_binary(self, data):
+-            # type: (Any) -> Any
+-            if hasattr(base64, 'encodebytes'):
+-                data = base64.encodebytes(data).decode('ascii')
+-            else:
+-                data = base64.encodestring(data).decode('ascii')
+-            return self.represent_scalar(u'tag:yaml.org,2002:binary', data, style='|')
+-
+-    else:
+-
+-        def represent_str(self, data):
+-            # type: (Any) -> Any
+-            tag = None
+-            style = None
+-            try:
+-                data = unicode(data, 'ascii')
+-                tag = u'tag:yaml.org,2002:str'
+-            except UnicodeDecodeError:
+-                try:
+-                    data = unicode(data, 'utf-8')
+-                    tag = u'tag:yaml.org,2002:str'
+-                except UnicodeDecodeError:
+-                    data = data.encode('base64')
+-                    tag = u'tag:yaml.org,2002:binary'
+-                    style = '|'
+-            return self.represent_scalar(tag, data, style=style)
+-
+-        def represent_unicode(self, data):
+-            # type: (Any) -> Any
+-            return self.represent_scalar(u'tag:yaml.org,2002:str', data)
+-
+-    def represent_bool(self, data, anchor=None):
+-        # type: (Any, Optional[Any]) -> Any
+-        try:
+-            value = self.dumper.boolean_representation[bool(data)]
+-        except AttributeError:
+-            if data:
+-                value = u'true'
+-            else:
+-                value = u'false'
+-        return self.represent_scalar(u'tag:yaml.org,2002:bool', value, anchor=anchor)
+-
+-    def represent_int(self, data):
+-        # type: (Any) -> Any
+-        return self.represent_scalar(u'tag:yaml.org,2002:int', text_type(data))
+-
+-    if PY2:
+-
+-        def represent_long(self, data):
+-            # type: (Any) -> Any
+-            return self.represent_scalar(u'tag:yaml.org,2002:int', text_type(data))
+-
+-    inf_value = 1e300
+-    while repr(inf_value) != repr(inf_value * inf_value):
+-        inf_value *= inf_value
+-
+-    def represent_float(self, data):
+-        # type: (Any) -> Any
+-        if data != data or (data == 0.0 and data == 1.0):
+-            value = u'.nan'
+-        elif data == self.inf_value:
+-            value = u'.inf'
+-        elif data == -self.inf_value:
+-            value = u'-.inf'
+-        else:
+-            value = to_unicode(repr(data)).lower()
+-            if getattr(self.serializer, 'use_version', None) == (1, 1):
+-                if u'.' not in value and u'e' in value:
+-                    # Note that in some cases `repr(data)` represents a float number
+-                    # without the decimal parts.  For instance:
+-                    #   >>> repr(1e17)
+-                    #   '1e17'
+-                    # Unfortunately, this is not a valid float representation according
+-                    # to the definition of the `!!float` tag in YAML 1.1.  We fix
+-                    # this by adding '.0' before the 'e' symbol.
+-                    value = value.replace(u'e', u'.0e', 1)
+-        return self.represent_scalar(u'tag:yaml.org,2002:float', value)
+-
+-    def represent_list(self, data):
+-        # type: (Any) -> Any
+-        # pairs = (len(data) > 0 and isinstance(data, list))
+-        # if pairs:
+-        #     for item in data:
+-        #         if not isinstance(item, tuple) or len(item) != 2:
+-        #             pairs = False
+-        #             break
+-        # if not pairs:
+-        return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
+-
+-    # value = []
+-    # for item_key, item_value in data:
+-    #     value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
+-    #         [(item_key, item_value)]))
+-    # return SequenceNode(u'tag:yaml.org,2002:pairs', value)
+-
+-    def represent_dict(self, data):
+-        # type: (Any) -> Any
+-        return self.represent_mapping(u'tag:yaml.org,2002:map', data)
+-
+-    def represent_ordereddict(self, data):
+-        # type: (Any) -> Any
+-        return self.represent_omap(u'tag:yaml.org,2002:omap', data)
+-
+-    def represent_set(self, data):
+-        # type: (Any) -> Any
+-        value = {}  # type: Dict[Any, None]
+-        for key in data:
+-            value[key] = None
+-        return self.represent_mapping(u'tag:yaml.org,2002:set', value)
+-
+-    def represent_date(self, data):
+-        # type: (Any) -> Any
+-        value = to_unicode(data.isoformat())
+-        return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
+-
+-    def represent_datetime(self, data):
+-        # type: (Any) -> Any
+-        value = to_unicode(data.isoformat(' '))
+-        return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
+-
+-    def represent_yaml_object(self, tag, data, cls, flow_style=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        if hasattr(data, '__getstate__'):
+-            state = data.__getstate__()
+-        else:
+-            state = data.__dict__.copy()
+-        return self.represent_mapping(tag, state, flow_style=flow_style)
+-
+-    def represent_undefined(self, data):
+-        # type: (Any) -> None
+-        raise RepresenterError('cannot represent an object: %s' % (data,))
+-
+-
+-SafeRepresenter.add_representer(type(None), SafeRepresenter.represent_none)
+-
+-SafeRepresenter.add_representer(str, SafeRepresenter.represent_str)
+-
+-if PY2:
+-    SafeRepresenter.add_representer(unicode, SafeRepresenter.represent_unicode)
+-else:
+-    SafeRepresenter.add_representer(bytes, SafeRepresenter.represent_binary)
+-
+-SafeRepresenter.add_representer(bool, SafeRepresenter.represent_bool)
+-
+-SafeRepresenter.add_representer(int, SafeRepresenter.represent_int)
+-
+-if PY2:
+-    SafeRepresenter.add_representer(long, SafeRepresenter.represent_long)
+-
+-SafeRepresenter.add_representer(float, SafeRepresenter.represent_float)
+-
+-SafeRepresenter.add_representer(list, SafeRepresenter.represent_list)
+-
+-SafeRepresenter.add_representer(tuple, SafeRepresenter.represent_list)
+-
+-SafeRepresenter.add_representer(dict, SafeRepresenter.represent_dict)
+-
+-SafeRepresenter.add_representer(set, SafeRepresenter.represent_set)
+-
+-SafeRepresenter.add_representer(ordereddict, SafeRepresenter.represent_ordereddict)
+-
+-if sys.version_info >= (2, 7):
+-    import collections
+-
+-    SafeRepresenter.add_representer(
+-        collections.OrderedDict, SafeRepresenter.represent_ordereddict
+-    )
+-
+-SafeRepresenter.add_representer(datetime.date, SafeRepresenter.represent_date)
+-
+-SafeRepresenter.add_representer(datetime.datetime, SafeRepresenter.represent_datetime)
+-
+-SafeRepresenter.add_representer(None, SafeRepresenter.represent_undefined)
+-
+-
+-class Representer(SafeRepresenter):
+-    if PY2:
+-
+-        def represent_str(self, data):
+-            # type: (Any) -> Any
+-            tag = None
+-            style = None
+-            try:
+-                data = unicode(data, 'ascii')
+-                tag = u'tag:yaml.org,2002:str'
+-            except UnicodeDecodeError:
+-                try:
+-                    data = unicode(data, 'utf-8')
+-                    tag = u'tag:yaml.org,2002:python/str'
+-                except UnicodeDecodeError:
+-                    data = data.encode('base64')
+-                    tag = u'tag:yaml.org,2002:binary'
+-                    style = '|'
+-            return self.represent_scalar(tag, data, style=style)
+-
+-        def represent_unicode(self, data):
+-            # type: (Any) -> Any
+-            tag = None
+-            try:
+-                data.encode('ascii')
+-                tag = u'tag:yaml.org,2002:python/unicode'
+-            except UnicodeEncodeError:
+-                tag = u'tag:yaml.org,2002:str'
+-            return self.represent_scalar(tag, data)
+-
+-        def represent_long(self, data):
+-            # type: (Any) -> Any
+-            tag = u'tag:yaml.org,2002:int'
+-            if int(data) is not data:
+-                tag = u'tag:yaml.org,2002:python/long'
+-            return self.represent_scalar(tag, to_unicode(data))
+-
+-    def represent_complex(self, data):
+-        # type: (Any) -> Any
+-        if data.imag == 0.0:
+-            data = u'%r' % data.real
+-        elif data.real == 0.0:
+-            data = u'%rj' % data.imag
+-        elif data.imag > 0:
+-            data = u'%r+%rj' % (data.real, data.imag)
+-        else:
+-            data = u'%r%rj' % (data.real, data.imag)
+-        return self.represent_scalar(u'tag:yaml.org,2002:python/complex', data)
+-
+-    def represent_tuple(self, data):
+-        # type: (Any) -> Any
+-        return self.represent_sequence(u'tag:yaml.org,2002:python/tuple', data)
+-
+-    def represent_name(self, data):
+-        # type: (Any) -> Any
+-        try:
+-            name = u'%s.%s' % (data.__module__, data.__qualname__)
+-        except AttributeError:
+-            # probably PY2
+-            name = u'%s.%s' % (data.__module__, data.__name__)
+-        return self.represent_scalar(u'tag:yaml.org,2002:python/name:' + name, "")
+-
+-    def represent_module(self, data):
+-        # type: (Any) -> Any
+-        return self.represent_scalar(u'tag:yaml.org,2002:python/module:' + data.__name__, "")
+-
+-    if PY2:
+-
+-        def represent_instance(self, data):
+-            # type: (Any) -> Any
+-            # For instances of classic classes, we use __getinitargs__ and
+-            # __getstate__ to serialize the data.
+-
+-            # If data.__getinitargs__ exists, the object must be reconstructed
+-            # by calling cls(**args), where args is a tuple returned by
+-            # __getinitargs__. Otherwise, the cls.__init__ method should never
+-            # be called and the class instance is created by instantiating a
+-            # trivial class and assigning to the instance's __class__ variable.
+-
+-            # If data.__getstate__ exists, it returns the state of the object.
+-            # Otherwise, the state of the object is data.__dict__.
+-
+-            # We produce either a !!python/object or !!python/object/new node.
+-            # If data.__getinitargs__ does not exist and state is a dictionary,
+-            # we produce a !!python/object node . Otherwise we produce a
+-            # !!python/object/new node.
+-
+-            cls = data.__class__
+-            class_name = u'%s.%s' % (cls.__module__, cls.__name__)
+-            args = None
+-            state = None
+-            if hasattr(data, '__getinitargs__'):
+-                args = list(data.__getinitargs__())
+-            if hasattr(data, '__getstate__'):
+-                state = data.__getstate__()
+-            else:
+-                state = data.__dict__
+-            if args is None and isinstance(state, dict):
+-                return self.represent_mapping(
+-                    u'tag:yaml.org,2002:python/object:' + class_name, state
+-                )
+-            if isinstance(state, dict) and not state:
+-                return self.represent_sequence(
+-                    u'tag:yaml.org,2002:python/object/new:' + class_name, args
+-                )
+-            value = {}
+-            if bool(args):
+-                value['args'] = args
+-            value['state'] = state  # type: ignore
+-            return self.represent_mapping(
+-                u'tag:yaml.org,2002:python/object/new:' + class_name, value
+-            )
+-
+-    def represent_object(self, data):
+-        # type: (Any) -> Any
+-        # We use __reduce__ API to save the data. data.__reduce__ returns
+-        # a tuple of length 2-5:
+-        #   (function, args, state, listitems, dictitems)
+-
+-        # For reconstructing, we calls function(*args), then set its state,
+-        # listitems, and dictitems if they are not None.
+-
+-        # A special case is when function.__name__ == '__newobj__'. In this
+-        # case we create the object with args[0].__new__(*args).
+-
+-        # Another special case is when __reduce__ returns a string - we don't
+-        # support it.
+-
+-        # We produce a !!python/object, !!python/object/new or
+-        # !!python/object/apply node.
+-
+-        cls = type(data)
+-        if cls in copyreg.dispatch_table:
+-            reduce = copyreg.dispatch_table[cls](data)
+-        elif hasattr(data, '__reduce_ex__'):
+-            reduce = data.__reduce_ex__(2)
+-        elif hasattr(data, '__reduce__'):
+-            reduce = data.__reduce__()
+-        else:
+-            raise RepresenterError('cannot represent object: %r' % (data,))
+-        reduce = (list(reduce) + [None] * 5)[:5]
+-        function, args, state, listitems, dictitems = reduce
+-        args = list(args)
+-        if state is None:
+-            state = {}
+-        if listitems is not None:
+-            listitems = list(listitems)
+-        if dictitems is not None:
+-            dictitems = dict(dictitems)
+-        if function.__name__ == '__newobj__':
+-            function = args[0]
+-            args = args[1:]
+-            tag = u'tag:yaml.org,2002:python/object/new:'
+-            newobj = True
+-        else:
+-            tag = u'tag:yaml.org,2002:python/object/apply:'
+-            newobj = False
+-        try:
+-            function_name = u'%s.%s' % (function.__module__, function.__qualname__)
+-        except AttributeError:
+-            # probably PY2
+-            function_name = u'%s.%s' % (function.__module__, function.__name__)
+-        if not args and not listitems and not dictitems and isinstance(state, dict) and newobj:
+-            return self.represent_mapping(
+-                u'tag:yaml.org,2002:python/object:' + function_name, state
+-            )
+-        if not listitems and not dictitems and isinstance(state, dict) and not state:
+-            return self.represent_sequence(tag + function_name, args)
+-        value = {}
+-        if args:
+-            value['args'] = args
+-        if state or not isinstance(state, dict):
+-            value['state'] = state
+-        if listitems:
+-            value['listitems'] = listitems
+-        if dictitems:
+-            value['dictitems'] = dictitems
+-        return self.represent_mapping(tag + function_name, value)
+-
+-
+-if PY2:
+-    Representer.add_representer(str, Representer.represent_str)
+-
+-    Representer.add_representer(unicode, Representer.represent_unicode)
+-
+-    Representer.add_representer(long, Representer.represent_long)
+-
+-Representer.add_representer(complex, Representer.represent_complex)
+-
+-Representer.add_representer(tuple, Representer.represent_tuple)
+-
+-Representer.add_representer(type, Representer.represent_name)
+-
+-if PY2:
+-    Representer.add_representer(types.ClassType, Representer.represent_name)
+-
+-Representer.add_representer(types.FunctionType, Representer.represent_name)
+-
+-Representer.add_representer(types.BuiltinFunctionType, Representer.represent_name)
+-
+-Representer.add_representer(types.ModuleType, Representer.represent_module)
+-
+-if PY2:
+-    Representer.add_multi_representer(types.InstanceType, Representer.represent_instance)
+-
+-Representer.add_multi_representer(object, Representer.represent_object)
+-
+-Representer.add_multi_representer(type, Representer.represent_name)
+-
+-from .comments import (
+-    CommentedMap,
+-    CommentedOrderedMap,
+-    CommentedSeq,
+-    CommentedKeySeq,
+-    CommentedKeyMap,
+-    CommentedSet,
+-    comment_attrib,
+-    merge_attrib,
+-    TaggedScalar,
+-)  # NOQA
+-
+-
+-class RoundTripRepresenter(SafeRepresenter):
+-    # need to add type here and write out the .comment
+-    # in serializer and emitter
+-
+-    def __init__(self, default_style=None, default_flow_style=None, dumper=None):
+-        # type: (Any, Any, Any) -> None
+-        if not hasattr(dumper, 'typ') and default_flow_style is None:
+-            default_flow_style = False
+-        SafeRepresenter.__init__(
+-            self,
+-            default_style=default_style,
+-            default_flow_style=default_flow_style,
+-            dumper=dumper,
+-        )
+-
+-    def ignore_aliases(self, data):
+-        # type: (Any) -> bool
+-        try:
+-            if data.anchor is not None and data.anchor.value is not None:
+-                return False
+-        except AttributeError:
+-            pass
+-        return SafeRepresenter.ignore_aliases(self, data)
+-
+-    def represent_none(self, data):
+-        # type: (Any) -> Any
+-        if len(self.represented_objects) == 0 and not self.serializer.use_explicit_start:
+-            # this will be open ended (although it is not yet)
+-            return self.represent_scalar(u'tag:yaml.org,2002:null', u'null')
+-        return self.represent_scalar(u'tag:yaml.org,2002:null', "")
+-
+-    def represent_literal_scalarstring(self, data):
+-        # type: (Any) -> Any
+-        tag = None
+-        style = '|'
+-        anchor = data.yaml_anchor(any=True)
+-        if PY2 and not isinstance(data, unicode):
+-            data = unicode(data, 'ascii')
+-        tag = u'tag:yaml.org,2002:str'
+-        return self.represent_scalar(tag, data, style=style, anchor=anchor)
+-
+-    represent_preserved_scalarstring = represent_literal_scalarstring
+-
+-    def represent_folded_scalarstring(self, data):
+-        # type: (Any) -> Any
+-        tag = None
+-        style = '>'
+-        anchor = data.yaml_anchor(any=True)
+-        for fold_pos in reversed(getattr(data, 'fold_pos', [])):
+-            if (
+-                data[fold_pos] == ' '
+-                and (fold_pos > 0 and not data[fold_pos - 1].isspace())
+-                and (fold_pos < len(data) and not data[fold_pos + 1].isspace())
+-            ):
+-                data = data[:fold_pos] + '\a' + data[fold_pos:]
+-        if PY2 and not isinstance(data, unicode):
+-            data = unicode(data, 'ascii')
+-        tag = u'tag:yaml.org,2002:str'
+-        return self.represent_scalar(tag, data, style=style, anchor=anchor)
+-
+-    def represent_single_quoted_scalarstring(self, data):
+-        # type: (Any) -> Any
+-        tag = None
+-        style = "'"
+-        anchor = data.yaml_anchor(any=True)
+-        if PY2 and not isinstance(data, unicode):
+-            data = unicode(data, 'ascii')
+-        tag = u'tag:yaml.org,2002:str'
+-        return self.represent_scalar(tag, data, style=style, anchor=anchor)
+-
+-    def represent_double_quoted_scalarstring(self, data):
+-        # type: (Any) -> Any
+-        tag = None
+-        style = '"'
+-        anchor = data.yaml_anchor(any=True)
+-        if PY2 and not isinstance(data, unicode):
+-            data = unicode(data, 'ascii')
+-        tag = u'tag:yaml.org,2002:str'
+-        return self.represent_scalar(tag, data, style=style, anchor=anchor)
+-
+-    def represent_plain_scalarstring(self, data):
+-        # type: (Any) -> Any
+-        tag = None
+-        style = ''
+-        anchor = data.yaml_anchor(any=True)
+-        if PY2 and not isinstance(data, unicode):
+-            data = unicode(data, 'ascii')
+-        tag = u'tag:yaml.org,2002:str'
+-        return self.represent_scalar(tag, data, style=style, anchor=anchor)
+-
+-    def insert_underscore(self, prefix, s, underscore, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        if underscore is None:
+-            return self.represent_scalar(u'tag:yaml.org,2002:int', prefix + s, anchor=anchor)
+-        if underscore[0]:
+-            sl = list(s)
+-            pos = len(s) - underscore[0]
+-            while pos > 0:
+-                sl.insert(pos, '_')
+-                pos -= underscore[0]
+-            s = "".join(sl)
+-        if underscore[1]:
+-            s = '_' + s
+-        if underscore[2]:
+-            s += '_'
+-        return self.represent_scalar(u'tag:yaml.org,2002:int', prefix + s, anchor=anchor)
+-
+-    def represent_scalar_int(self, data):
+-        # type: (Any) -> Any
+-        if data._width is not None:
+-            s = '{:0{}d}'.format(data, data._width)
+-        else:
+-            s = format(data, 'd')
+-        anchor = data.yaml_anchor(any=True)
+-        return self.insert_underscore("", s, data._underscore, anchor=anchor)
+-
+-    def represent_binary_int(self, data):
+-        # type: (Any) -> Any
+-        if data._width is not None:
+-            # cannot use '{:#0{}b}', that strips the zeros
+-            s = '{:0{}b}'.format(data, data._width)
+-        else:
+-            s = format(data, 'b')
+-        anchor = data.yaml_anchor(any=True)
+-        return self.insert_underscore('0b', s, data._underscore, anchor=anchor)
+-
+-    def represent_octal_int(self, data):
+-        # type: (Any) -> Any
+-        if data._width is not None:
+-            # cannot use '{:#0{}o}', that strips the zeros
+-            s = '{:0{}o}'.format(data, data._width)
+-        else:
+-            s = format(data, 'o')
+-        anchor = data.yaml_anchor(any=True)
+-        return self.insert_underscore('0o', s, data._underscore, anchor=anchor)
+-
+-    def represent_hex_int(self, data):
+-        # type: (Any) -> Any
+-        if data._width is not None:
+-            # cannot use '{:#0{}x}', that strips the zeros
+-            s = '{:0{}x}'.format(data, data._width)
+-        else:
+-            s = format(data, 'x')
+-        anchor = data.yaml_anchor(any=True)
+-        return self.insert_underscore('0x', s, data._underscore, anchor=anchor)
+-
+-    def represent_hex_caps_int(self, data):
+-        # type: (Any) -> Any
+-        if data._width is not None:
+-            # cannot use '{:#0{}X}', that strips the zeros
+-            s = '{:0{}X}'.format(data, data._width)
+-        else:
+-            s = format(data, 'X')
+-        anchor = data.yaml_anchor(any=True)
+-        return self.insert_underscore('0x', s, data._underscore, anchor=anchor)
+-
+-    def represent_scalar_float(self, data):
+-        # type: (Any) -> Any
+-        """ this is way more complicated """
+-        value = None
+-        anchor = data.yaml_anchor(any=True)
+-        if data != data or (data == 0.0 and data == 1.0):
+-            value = u'.nan'
+-        elif data == self.inf_value:
+-            value = u'.inf'
+-        elif data == -self.inf_value:
+-            value = u'-.inf'
+-        if value:
+-            return self.represent_scalar(u'tag:yaml.org,2002:float', value, anchor=anchor)
+-        if data._exp is None and data._prec > 0 and data._prec == data._width - 1:
+-            # no exponent, but trailing dot
+-            value = u'{}{:d}.'.format(data._m_sign if data._m_sign else "", abs(int(data)))
+-        elif data._exp is None:
+-            # no exponent, "normal" dot
+-            prec = data._prec
+-            ms = data._m_sign if data._m_sign else ""
+-            # -1 for the dot
+-            value = u'{}{:0{}.{}f}'.format(
+-                ms, abs(data), data._width - len(ms), data._width - prec - 1
+-            )
+-            if prec == 0 or (prec == 1 and ms != ""):
+-                value = value.replace(u'0.', u'.')
+-            while len(value) < data._width:
+-                value += u'0'
+-        else:
+-            # exponent
+-            m, es = u'{:{}.{}e}'.format(
+-                # data, data._width, data._width - data._prec + (1 if data._m_sign else 0)
+-                data,
+-                data._width,
+-                data._width + (1 if data._m_sign else 0),
+-            ).split('e')
+-            w = data._width if data._prec > 0 else (data._width + 1)
+-            if data < 0:
+-                w += 1
+-            m = m[:w]
+-            e = int(es)
+-            m1, m2 = m.split('.')  # always second?
+-            while len(m1) + len(m2) < data._width - (1 if data._prec >= 0 else 0):
+-                m2 += u'0'
+-            if data._m_sign and data > 0:
+-                m1 = '+' + m1
+-            esgn = u'+' if data._e_sign else ""
+-            if data._prec < 0:  # mantissa without dot
+-                if m2 != u'0':
+-                    e -= len(m2)
+-                else:
+-                    m2 = ""
+-                while (len(m1) + len(m2) - (1 if data._m_sign else 0)) < data._width:
+-                    m2 += u'0'
+-                    e -= 1
+-                value = m1 + m2 + data._exp + u'{:{}0{}d}'.format(e, esgn, data._e_width)
+-            elif data._prec == 0:  # mantissa with trailing dot
+-                e -= len(m2)
+-                value = (
+-                    m1 + m2 + u'.' + data._exp + u'{:{}0{}d}'.format(e, esgn, data._e_width)
+-                )
+-            else:
+-                if data._m_lead0 > 0:
+-                    m2 = u'0' * (data._m_lead0 - 1) + m1 + m2
+-                    m1 = u'0'
+-                    m2 = m2[: -data._m_lead0]  # these should be zeros
+-                    e += data._m_lead0
+-                while len(m1) < data._prec:
+-                    m1 += m2[0]
+-                    m2 = m2[1:]
+-                    e -= 1
+-                value = (
+-                    m1 + u'.' + m2 + data._exp + u'{:{}0{}d}'.format(e, esgn, data._e_width)
+-                )
+-
+-        if value is None:
+-            value = to_unicode(repr(data)).lower()
+-        return self.represent_scalar(u'tag:yaml.org,2002:float', value, anchor=anchor)
+-
+-    def represent_sequence(self, tag, sequence, flow_style=None):
+-        # type: (Any, Any, Any) -> Any
+-        value = []  # type: List[Any]
+-        # if the flow_style is None, the flow style tacked on to the object
+-        # explicitly will be taken. If that is None as well the default flow
+-        # style rules
+-        try:
+-            flow_style = sequence.fa.flow_style(flow_style)
+-        except AttributeError:
+-            flow_style = flow_style
+-        try:
+-            anchor = sequence.yaml_anchor()
+-        except AttributeError:
+-            anchor = None
+-        node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        try:
+-            comment = getattr(sequence, comment_attrib)
+-            node.comment = comment.comment
+-            # reset any comment already printed information
+-            if node.comment and node.comment[1]:
+-                for ct in node.comment[1]:
+-                    ct.reset()
+-            item_comments = comment.items
+-            for v in item_comments.values():
+-                if v and v[1]:
+-                    for ct in v[1]:
+-                        ct.reset()
+-            item_comments = comment.items
+-            node.comment = comment.comment
+-            try:
+-                node.comment.append(comment.end)
+-            except AttributeError:
+-                pass
+-        except AttributeError:
+-            item_comments = {}
+-        for idx, item in enumerate(sequence):
+-            node_item = self.represent_data(item)
+-            self.merge_comments(node_item, item_comments.get(idx))
+-            if not (isinstance(node_item, ScalarNode) and not node_item.style):
+-                best_style = False
+-            value.append(node_item)
+-        if flow_style is None:
+-            if len(sequence) != 0 and self.default_flow_style is not None:
+-                node.flow_style = self.default_flow_style
+-            else:
+-                node.flow_style = best_style
+-        return node
+-
+-    def merge_comments(self, node, comments):
+-        # type: (Any, Any) -> Any
+-        if comments is None:
+-            assert hasattr(node, 'comment')
+-            return node
+-        if getattr(node, 'comment', None) is not None:
+-            for idx, val in enumerate(comments):
+-                if idx >= len(node.comment):
+-                    continue
+-                nc = node.comment[idx]
+-                if nc is not None:
+-                    assert val is None or val == nc
+-                    comments[idx] = nc
+-        node.comment = comments
+-        return node
+-
+-    def represent_key(self, data):
+-        # type: (Any) -> Any
+-        if isinstance(data, CommentedKeySeq):
+-            self.alias_key = None
+-            return self.represent_sequence(u'tag:yaml.org,2002:seq', data, flow_style=True)
+-        if isinstance(data, CommentedKeyMap):
+-            self.alias_key = None
+-            return self.represent_mapping(u'tag:yaml.org,2002:map', data, flow_style=True)
+-        return SafeRepresenter.represent_key(self, data)
+-
+-    def represent_mapping(self, tag, mapping, flow_style=None):
+-        # type: (Any, Any, Any) -> Any
+-        value = []  # type: List[Any]
+-        try:
+-            flow_style = mapping.fa.flow_style(flow_style)
+-        except AttributeError:
+-            flow_style = flow_style
+-        try:
+-            anchor = mapping.yaml_anchor()
+-        except AttributeError:
+-            anchor = None
+-        node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        # no sorting! !!
+-        try:
+-            comment = getattr(mapping, comment_attrib)
+-            node.comment = comment.comment
+-            if node.comment and node.comment[1]:
+-                for ct in node.comment[1]:
+-                    ct.reset()
+-            item_comments = comment.items
+-            for v in item_comments.values():
+-                if v and v[1]:
+-                    for ct in v[1]:
+-                        ct.reset()
+-            try:
+-                node.comment.append(comment.end)
+-            except AttributeError:
+-                pass
+-        except AttributeError:
+-            item_comments = {}
+-        merge_list = [m[1] for m in getattr(mapping, merge_attrib, [])]
+-        try:
+-            merge_pos = getattr(mapping, merge_attrib, [[0]])[0][0]
+-        except IndexError:
+-            merge_pos = 0
+-        item_count = 0
+-        if bool(merge_list):
+-            items = mapping.non_merged_items()
+-        else:
+-            items = mapping.items()
+-        for item_key, item_value in items:
+-            item_count += 1
+-            node_key = self.represent_key(item_key)
+-            node_value = self.represent_data(item_value)
+-            item_comment = item_comments.get(item_key)
+-            if item_comment:
+-                assert getattr(node_key, 'comment', None) is None
+-                node_key.comment = item_comment[:2]
+-                nvc = getattr(node_value, 'comment', None)
+-                if nvc is not None:  # end comment already there
+-                    nvc[0] = item_comment[2]
+-                    nvc[1] = item_comment[3]
+-                else:
+-                    node_value.comment = item_comment[2:]
+-            if not (isinstance(node_key, ScalarNode) and not node_key.style):
+-                best_style = False
+-            if not (isinstance(node_value, ScalarNode) and not node_value.style):
+-                best_style = False
+-            value.append((node_key, node_value))
+-        if flow_style is None:
+-            if ((item_count != 0) or bool(merge_list)) and self.default_flow_style is not None:
+-                node.flow_style = self.default_flow_style
+-            else:
+-                node.flow_style = best_style
+-        if bool(merge_list):
+-            # because of the call to represent_data here, the anchors
+-            # are marked as being used and thereby created
+-            if len(merge_list) == 1:
+-                arg = self.represent_data(merge_list[0])
+-            else:
+-                arg = self.represent_data(merge_list)
+-                arg.flow_style = True
+-            value.insert(merge_pos, (ScalarNode(u'tag:yaml.org,2002:merge', '<<'), arg))
+-        return node
+-
+-    def represent_omap(self, tag, omap, flow_style=None):
+-        # type: (Any, Any, Any) -> Any
+-        value = []  # type: List[Any]
+-        try:
+-            flow_style = omap.fa.flow_style(flow_style)
+-        except AttributeError:
+-            flow_style = flow_style
+-        try:
+-            anchor = omap.yaml_anchor()
+-        except AttributeError:
+-            anchor = None
+-        node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        try:
+-            comment = getattr(omap, comment_attrib)
+-            node.comment = comment.comment
+-            if node.comment and node.comment[1]:
+-                for ct in node.comment[1]:
+-                    ct.reset()
+-            item_comments = comment.items
+-            for v in item_comments.values():
+-                if v and v[1]:
+-                    for ct in v[1]:
+-                        ct.reset()
+-            try:
+-                node.comment.append(comment.end)
+-            except AttributeError:
+-                pass
+-        except AttributeError:
+-            item_comments = {}
+-        for item_key in omap:
+-            item_val = omap[item_key]
+-            node_item = self.represent_data({item_key: item_val})
+-            # node_item.flow_style = False
+-            # node item has two scalars in value: node_key and node_value
+-            item_comment = item_comments.get(item_key)
+-            if item_comment:
+-                if item_comment[1]:
+-                    node_item.comment = [None, item_comment[1]]
+-                assert getattr(node_item.value[0][0], 'comment', None) is None
+-                node_item.value[0][0].comment = [item_comment[0], None]
+-                nvc = getattr(node_item.value[0][1], 'comment', None)
+-                if nvc is not None:  # end comment already there
+-                    nvc[0] = item_comment[2]
+-                    nvc[1] = item_comment[3]
+-                else:
+-                    node_item.value[0][1].comment = item_comment[2:]
+-            # if not (isinstance(node_item, ScalarNode) \
+-            #    and not node_item.style):
+-            #     best_style = False
+-            value.append(node_item)
+-        if flow_style is None:
+-            if self.default_flow_style is not None:
+-                node.flow_style = self.default_flow_style
+-            else:
+-                node.flow_style = best_style
+-        return node
+-
+-    def represent_set(self, setting):
+-        # type: (Any) -> Any
+-        flow_style = False
+-        tag = u'tag:yaml.org,2002:set'
+-        # return self.represent_mapping(tag, value)
+-        value = []  # type: List[Any]
+-        flow_style = setting.fa.flow_style(flow_style)
+-        try:
+-            anchor = setting.yaml_anchor()
+-        except AttributeError:
+-            anchor = None
+-        node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor)
+-        if self.alias_key is not None:
+-            self.represented_objects[self.alias_key] = node
+-        best_style = True
+-        # no sorting! !!
+-        try:
+-            comment = getattr(setting, comment_attrib)
+-            node.comment = comment.comment
+-            if node.comment and node.comment[1]:
+-                for ct in node.comment[1]:
+-                    ct.reset()
+-            item_comments = comment.items
+-            for v in item_comments.values():
+-                if v and v[1]:
+-                    for ct in v[1]:
+-                        ct.reset()
+-            try:
+-                node.comment.append(comment.end)
+-            except AttributeError:
+-                pass
+-        except AttributeError:
+-            item_comments = {}
+-        for item_key in setting.odict:
+-            node_key = self.represent_key(item_key)
+-            node_value = self.represent_data(None)
+-            item_comment = item_comments.get(item_key)
+-            if item_comment:
+-                assert getattr(node_key, 'comment', None) is None
+-                node_key.comment = item_comment[:2]
+-            node_key.style = node_value.style = '?'
+-            if not (isinstance(node_key, ScalarNode) and not node_key.style):
+-                best_style = False
+-            if not (isinstance(node_value, ScalarNode) and not node_value.style):
+-                best_style = False
+-            value.append((node_key, node_value))
+-        best_style = best_style
+-        return node
+-
+-    def represent_dict(self, data):
+-        # type: (Any) -> Any
+-        """write out tag if saved on loading"""
+-        try:
+-            t = data.tag.value
+-        except AttributeError:
+-            t = None
+-        if t:
+-            if t.startswith('!!'):
+-                tag = 'tag:yaml.org,2002:' + t[2:]
+-            else:
+-                tag = t
+-        else:
+-            tag = u'tag:yaml.org,2002:map'
+-        return self.represent_mapping(tag, data)
+-
+-    def represent_list(self, data):
+-        # type: (Any) -> Any
+-        try:
+-            t = data.tag.value
+-        except AttributeError:
+-            t = None
+-        if t:
+-            if t.startswith('!!'):
+-                tag = 'tag:yaml.org,2002:' + t[2:]
+-            else:
+-                tag = t
+-        else:
+-            tag = u'tag:yaml.org,2002:seq'
+-        return self.represent_sequence(tag, data)
+-
+-    def represent_datetime(self, data):
+-        # type: (Any) -> Any
+-        inter = 'T' if data._yaml['t'] else ' '
+-        _yaml = data._yaml
+-        if _yaml['delta']:
+-            data += _yaml['delta']
+-            value = data.isoformat(inter)
+-        else:
+-            value = data.isoformat(inter)
+-        if _yaml['tz']:
+-            value += _yaml['tz']
+-        return self.represent_scalar(u'tag:yaml.org,2002:timestamp', to_unicode(value))
+-
+-    def represent_tagged_scalar(self, data):
+-        # type: (Any) -> Any
+-        try:
+-            tag = data.tag.value
+-        except AttributeError:
+-            tag = None
+-        try:
+-            anchor = data.yaml_anchor()
+-        except AttributeError:
+-            anchor = None
+-        return self.represent_scalar(tag, data.value, style=data.style, anchor=anchor)
+-
+-    def represent_scalar_bool(self, data):
+-        # type: (Any) -> Any
+-        try:
+-            anchor = data.yaml_anchor()
+-        except AttributeError:
+-            anchor = None
+-        return SafeRepresenter.represent_bool(self, data, anchor=anchor)
+-
+-
+-RoundTripRepresenter.add_representer(type(None), RoundTripRepresenter.represent_none)
+-
+-RoundTripRepresenter.add_representer(
+-    LiteralScalarString, RoundTripRepresenter.represent_literal_scalarstring
+-)
+-
+-RoundTripRepresenter.add_representer(
+-    FoldedScalarString, RoundTripRepresenter.represent_folded_scalarstring
+-)
+-
+-RoundTripRepresenter.add_representer(
+-    SingleQuotedScalarString, RoundTripRepresenter.represent_single_quoted_scalarstring
+-)
+-
+-RoundTripRepresenter.add_representer(
+-    DoubleQuotedScalarString, RoundTripRepresenter.represent_double_quoted_scalarstring
+-)
+-
+-RoundTripRepresenter.add_representer(
+-    PlainScalarString, RoundTripRepresenter.represent_plain_scalarstring
+-)
+-
+-# RoundTripRepresenter.add_representer(tuple, Representer.represent_tuple)
+-
+-RoundTripRepresenter.add_representer(ScalarInt, RoundTripRepresenter.represent_scalar_int)
+-
+-RoundTripRepresenter.add_representer(BinaryInt, RoundTripRepresenter.represent_binary_int)
+-
+-RoundTripRepresenter.add_representer(OctalInt, RoundTripRepresenter.represent_octal_int)
+-
+-RoundTripRepresenter.add_representer(HexInt, RoundTripRepresenter.represent_hex_int)
+-
+-RoundTripRepresenter.add_representer(HexCapsInt, RoundTripRepresenter.represent_hex_caps_int)
+-
+-RoundTripRepresenter.add_representer(ScalarFloat, RoundTripRepresenter.represent_scalar_float)
+-
+-RoundTripRepresenter.add_representer(ScalarBoolean, RoundTripRepresenter.represent_scalar_bool)
+-
+-RoundTripRepresenter.add_representer(CommentedSeq, RoundTripRepresenter.represent_list)
+-
+-RoundTripRepresenter.add_representer(CommentedMap, RoundTripRepresenter.represent_dict)
+-
+-RoundTripRepresenter.add_representer(
+-    CommentedOrderedMap, RoundTripRepresenter.represent_ordereddict
+-)
+-
+-if sys.version_info >= (2, 7):
+-    import collections
+-
+-    RoundTripRepresenter.add_representer(
+-        collections.OrderedDict, RoundTripRepresenter.represent_ordereddict
+-    )
+-
+-RoundTripRepresenter.add_representer(CommentedSet, RoundTripRepresenter.represent_set)
+-
+-RoundTripRepresenter.add_representer(
+-    TaggedScalar, RoundTripRepresenter.represent_tagged_scalar
+-)
+-
+-RoundTripRepresenter.add_representer(TimeStamp, RoundTripRepresenter.represent_datetime)
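For context on the representer code removed above: the long run of `add_representer` calls builds a class-level registry mapping Python types to emit functions. A stdlib-only sketch of that dispatch pattern (names here are illustrative, not ruamel.yaml's actual API):

```python
# Sketch of the add_representer dispatch used by the deleted
# RoundTripRepresenter: each type maps to a function producing a
# (tag, value) pair. Illustrative only; not the ruamel.yaml API.

class MiniRepresenter:
    representers = {}  # class-level registry: type -> callable

    @classmethod
    def add_representer(cls, data_type, fn):
        # copy-on-write, so registering on a subclass does not
        # mutate the parent class's registry (mirrors the upstream
        # "deepcopy doesn't work here" dance)
        if 'representers' not in cls.__dict__:
            cls.representers = dict(cls.representers)
        cls.representers[data_type] = fn

    def represent_data(self, data):
        # exact type match, mirroring the per-type registrations
        fn = self.representers.get(type(data))
        if fn is None:
            raise TypeError('no representer for %r' % type(data))
        return fn(self, data)

def represent_none(self, data):
    return ('tag:yaml.org,2002:null', 'null')

def represent_list(self, data):
    return ('tag:yaml.org,2002:seq',
            [self.represent_data(v) for v in data])

MiniRepresenter.add_representer(type(None), represent_none)
MiniRepresenter.add_representer(list, represent_list)

r = MiniRepresenter()
print(r.represent_data([None]))
# ('tag:yaml.org,2002:seq', [('tag:yaml.org,2002:null', 'null')])
```

The real code additionally threads comment and anchor metadata through each node, which is what makes round-tripping lossless.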
+diff --git a/dynaconf/vendor_src/ruamel/yaml/resolver.py b/dynaconf/vendor_src/ruamel/yaml/resolver.py
+deleted file mode 100644
+index d771d80..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/resolver.py
++++ /dev/null
+@@ -1,399 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-import re
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, List, Union, Text, Optional  # NOQA
+-    from .compat import VersionType  # NOQA
+-
+-from .compat import string_types, _DEFAULT_YAML_VERSION  # NOQA
+-from .error import *  # NOQA
+-from .nodes import MappingNode, ScalarNode, SequenceNode  # NOQA
+-from .util import RegExp  # NOQA
+-
+-__all__ = ['BaseResolver', 'Resolver', 'VersionedResolver']
+-
+-
+-# fmt: off
+-# resolvers consist of
+-# - a list of applicable version
+-# - a tag
+-# - a regexp
+-# - a list of first characters to match
+-implicit_resolvers = [
+-    ([(1, 2)],
+-        u'tag:yaml.org,2002:bool',
+-        RegExp(u'''^(?:true|True|TRUE|false|False|FALSE)$''', re.X),
+-        list(u'tTfF')),
+-    ([(1, 1)],
+-        u'tag:yaml.org,2002:bool',
+-        RegExp(u'''^(?:y|Y|yes|Yes|YES|n|N|no|No|NO
+-        |true|True|TRUE|false|False|FALSE
+-        |on|On|ON|off|Off|OFF)$''', re.X),
+-        list(u'yYnNtTfFoO')),
+-    ([(1, 2)],
+-        u'tag:yaml.org,2002:float',
+-        RegExp(u'''^(?:
+-         [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
+-        |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
+-        |[-+]?\\.[0-9_]+(?:[eE][-+][0-9]+)?
+-        |[-+]?\\.(?:inf|Inf|INF)
+-        |\\.(?:nan|NaN|NAN))$''', re.X),
+-        list(u'-+0123456789.')),
+-    ([(1, 1)],
+-        u'tag:yaml.org,2002:float',
+-        RegExp(u'''^(?:
+-         [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
+-        |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
+-        |\\.[0-9_]+(?:[eE][-+][0-9]+)?
+-        |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]*  # sexagesimal float
+-        |[-+]?\\.(?:inf|Inf|INF)
+-        |\\.(?:nan|NaN|NAN))$''', re.X),
+-        list(u'-+0123456789.')),
+-    ([(1, 2)],
+-        u'tag:yaml.org,2002:int',
+-        RegExp(u'''^(?:[-+]?0b[0-1_]+
+-        |[-+]?0o?[0-7_]+
+-        |[-+]?[0-9_]+
+-        |[-+]?0x[0-9a-fA-F_]+)$''', re.X),
+-        list(u'-+0123456789')),
+-    ([(1, 1)],
+-        u'tag:yaml.org,2002:int',
+-        RegExp(u'''^(?:[-+]?0b[0-1_]+
+-        |[-+]?0?[0-7_]+
+-        |[-+]?(?:0|[1-9][0-9_]*)
+-        |[-+]?0x[0-9a-fA-F_]+
+-        |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),  # sexagesimal int
+-        list(u'-+0123456789')),
+-    ([(1, 2), (1, 1)],
+-        u'tag:yaml.org,2002:merge',
+-        RegExp(u'^(?:<<)$'),
+-        [u'<']),
+-    ([(1, 2), (1, 1)],
+-        u'tag:yaml.org,2002:null',
+-        RegExp(u'''^(?: ~
+-        |null|Null|NULL
+-        | )$''', re.X),
+-        [u'~', u'n', u'N', u'']),
+-    ([(1, 2), (1, 1)],
+-        u'tag:yaml.org,2002:timestamp',
+-        RegExp(u'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
+-        |[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
+-        (?:[Tt]|[ \\t]+)[0-9][0-9]?
+-        :[0-9][0-9] :[0-9][0-9] (?:\\.[0-9]*)?
+-        (?:[ \\t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
+-        list(u'0123456789')),
+-    ([(1, 2), (1, 1)],
+-        u'tag:yaml.org,2002:value',
+-        RegExp(u'^(?:=)$'),
+-        [u'=']),
+-    # The following resolver is only for documentation purposes. It cannot work
+-    # because plain scalars cannot start with '!', '&', or '*'.
+-    ([(1, 2), (1, 1)],
+-        u'tag:yaml.org,2002:yaml',
+-        RegExp(u'^(?:!|&|\\*)$'),
+-        list(u'!&*')),
+-]
+-# fmt: on
+-
+-
+-class ResolverError(YAMLError):
+-    pass
+-
+-
+-class BaseResolver(object):
+-
+-    DEFAULT_SCALAR_TAG = u'tag:yaml.org,2002:str'
+-    DEFAULT_SEQUENCE_TAG = u'tag:yaml.org,2002:seq'
+-    DEFAULT_MAPPING_TAG = u'tag:yaml.org,2002:map'
+-
+-    yaml_implicit_resolvers = {}  # type: Dict[Any, Any]
+-    yaml_path_resolvers = {}  # type: Dict[Any, Any]
+-
+-    def __init__(self, loadumper=None):
+-        # type: (Any, Any) -> None
+-        self.loadumper = loadumper
+-        if self.loadumper is not None and getattr(self.loadumper, '_resolver', None) is None:
+-            self.loadumper._resolver = self.loadumper
+-        self._loader_version = None  # type: Any
+-        self.resolver_exact_paths = []  # type: List[Any]
+-        self.resolver_prefix_paths = []  # type: List[Any]
+-
+-    @property
+-    def parser(self):
+-        # type: () -> Any
+-        if self.loadumper is not None:
+-            if hasattr(self.loadumper, 'typ'):
+-                return self.loadumper.parser
+-            return self.loadumper._parser
+-        return None
+-
+-    @classmethod
+-    def add_implicit_resolver_base(cls, tag, regexp, first):
+-        # type: (Any, Any, Any) -> None
+-        if 'yaml_implicit_resolvers' not in cls.__dict__:
+-            # deepcopy doesn't work here
+-            cls.yaml_implicit_resolvers = dict(
+-                (k, cls.yaml_implicit_resolvers[k][:]) for k in cls.yaml_implicit_resolvers
+-            )
+-        if first is None:
+-            first = [None]
+-        for ch in first:
+-            cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
+-
+-    @classmethod
+-    def add_implicit_resolver(cls, tag, regexp, first):
+-        # type: (Any, Any, Any) -> None
+-        if 'yaml_implicit_resolvers' not in cls.__dict__:
+-            # deepcopy doesn't work here
+-            cls.yaml_implicit_resolvers = dict(
+-                (k, cls.yaml_implicit_resolvers[k][:]) for k in cls.yaml_implicit_resolvers
+-            )
+-        if first is None:
+-            first = [None]
+-        for ch in first:
+-            cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
+-        implicit_resolvers.append(([(1, 2), (1, 1)], tag, regexp, first))
+-
+-    # @classmethod
+-    # def add_implicit_resolver(cls, tag, regexp, first):
+-
+-    @classmethod
+-    def add_path_resolver(cls, tag, path, kind=None):
+-        # type: (Any, Any, Any) -> None
+-        # Note: `add_path_resolver` is experimental.  The API could be changed.
+-        # `new_path` is a pattern that is matched against the path from the
+-        # root to the node that is being considered.  `node_path` elements are
+-        # tuples `(node_check, index_check)`.  `node_check` is a node class:
+-        # `ScalarNode`, `SequenceNode`, `MappingNode` or `None`.  `None`
+-        # matches any kind of a node.  `index_check` could be `None`, a boolean
+-        # value, a string value, or a number.  `None` and `False` match against
+-        # any _value_ of sequence and mapping nodes.  `True` matches against
+-        # any _key_ of a mapping node.  A string `index_check` matches against
+-        # a mapping value that corresponds to a scalar key which content is
+-        # equal to the `index_check` value.  An integer `index_check` matches
+-        # against a sequence value with the index equal to `index_check`.
+-        if 'yaml_path_resolvers' not in cls.__dict__:
+-            cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
+-        new_path = []  # type: List[Any]
+-        for element in path:
+-            if isinstance(element, (list, tuple)):
+-                if len(element) == 2:
+-                    node_check, index_check = element
+-                elif len(element) == 1:
+-                    node_check = element[0]
+-                    index_check = True
+-                else:
+-                    raise ResolverError('Invalid path element: %s' % (element,))
+-            else:
+-                node_check = None
+-                index_check = element
+-            if node_check is str:
+-                node_check = ScalarNode
+-            elif node_check is list:
+-                node_check = SequenceNode
+-            elif node_check is dict:
+-                node_check = MappingNode
+-            elif (
+-                node_check not in [ScalarNode, SequenceNode, MappingNode]
+-                and not isinstance(node_check, string_types)
+-                and node_check is not None
+-            ):
+-                raise ResolverError('Invalid node checker: %s' % (node_check,))
+-            if not isinstance(index_check, (string_types, int)) and index_check is not None:
+-                raise ResolverError('Invalid index checker: %s' % (index_check,))
+-            new_path.append((node_check, index_check))
+-        if kind is str:
+-            kind = ScalarNode
+-        elif kind is list:
+-            kind = SequenceNode
+-        elif kind is dict:
+-            kind = MappingNode
+-        elif kind not in [ScalarNode, SequenceNode, MappingNode] and kind is not None:
+-            raise ResolverError('Invalid node kind: %s' % (kind,))
+-        cls.yaml_path_resolvers[tuple(new_path), kind] = tag
+-
+-    def descend_resolver(self, current_node, current_index):
+-        # type: (Any, Any) -> None
+-        if not self.yaml_path_resolvers:
+-            return
+-        exact_paths = {}
+-        prefix_paths = []
+-        if current_node:
+-            depth = len(self.resolver_prefix_paths)
+-            for path, kind in self.resolver_prefix_paths[-1]:
+-                if self.check_resolver_prefix(depth, path, kind, current_node, current_index):
+-                    if len(path) > depth:
+-                        prefix_paths.append((path, kind))
+-                    else:
+-                        exact_paths[kind] = self.yaml_path_resolvers[path, kind]
+-        else:
+-            for path, kind in self.yaml_path_resolvers:
+-                if not path:
+-                    exact_paths[kind] = self.yaml_path_resolvers[path, kind]
+-                else:
+-                    prefix_paths.append((path, kind))
+-        self.resolver_exact_paths.append(exact_paths)
+-        self.resolver_prefix_paths.append(prefix_paths)
+-
+-    def ascend_resolver(self):
+-        # type: () -> None
+-        if not self.yaml_path_resolvers:
+-            return
+-        self.resolver_exact_paths.pop()
+-        self.resolver_prefix_paths.pop()
+-
+-    def check_resolver_prefix(self, depth, path, kind, current_node, current_index):
+-        # type: (int, Text, Any, Any, Any) -> bool
+-        node_check, index_check = path[depth - 1]
+-        if isinstance(node_check, string_types):
+-            if current_node.tag != node_check:
+-                return False
+-        elif node_check is not None:
+-            if not isinstance(current_node, node_check):
+-                return False
+-        if index_check is True and current_index is not None:
+-            return False
+-        if (index_check is False or index_check is None) and current_index is None:
+-            return False
+-        if isinstance(index_check, string_types):
+-            if not (
+-                isinstance(current_index, ScalarNode) and index_check == current_index.value
+-            ):
+-                return False
+-        elif isinstance(index_check, int) and not isinstance(index_check, bool):
+-            if index_check != current_index:
+-                return False
+-        return True
+-
+-    def resolve(self, kind, value, implicit):
+-        # type: (Any, Any, Any) -> Any
+-        if kind is ScalarNode and implicit[0]:
+-            if value == "":
+-                resolvers = self.yaml_implicit_resolvers.get("", [])
+-            else:
+-                resolvers = self.yaml_implicit_resolvers.get(value[0], [])
+-            resolvers += self.yaml_implicit_resolvers.get(None, [])
+-            for tag, regexp in resolvers:
+-                if regexp.match(value):
+-                    return tag
+-            implicit = implicit[1]
+-        if bool(self.yaml_path_resolvers):
+-            exact_paths = self.resolver_exact_paths[-1]
+-            if kind in exact_paths:
+-                return exact_paths[kind]
+-            if None in exact_paths:
+-                return exact_paths[None]
+-        if kind is ScalarNode:
+-            return self.DEFAULT_SCALAR_TAG
+-        elif kind is SequenceNode:
+-            return self.DEFAULT_SEQUENCE_TAG
+-        elif kind is MappingNode:
+-            return self.DEFAULT_MAPPING_TAG
+-
+-    @property
+-    def processing_version(self):
+-        # type: () -> Any
+-        return None
+-
+-
+-class Resolver(BaseResolver):
+-    pass
+-
+-
+-for ir in implicit_resolvers:
+-    if (1, 2) in ir[0]:
+-        Resolver.add_implicit_resolver_base(*ir[1:])
+-
+-
+-class VersionedResolver(BaseResolver):
+-    """
+-    contrary to the "normal" resolver, the smart resolver delays loading
+-    the pattern matching rules. That way it can decide to load 1.1 rules
+-    or the (default) 1.2 rules, that no longer support octal without 0o, sexagesimals
+-    and Yes/No/On/Off booleans.
+-    """
+-
+-    def __init__(self, version=None, loader=None, loadumper=None):
+-        # type: (Optional[VersionType], Any, Any) -> None
+-        if loader is None and loadumper is not None:
+-            loader = loadumper
+-        BaseResolver.__init__(self, loader)
+-        self._loader_version = self.get_loader_version(version)
+-        self._version_implicit_resolver = {}  # type: Dict[Any, Any]
+-
+-    def add_version_implicit_resolver(self, version, tag, regexp, first):
+-        # type: (VersionType, Any, Any, Any) -> None
+-        if first is None:
+-            first = [None]
+-        impl_resolver = self._version_implicit_resolver.setdefault(version, {})
+-        for ch in first:
+-            impl_resolver.setdefault(ch, []).append((tag, regexp))
+-
+-    def get_loader_version(self, version):
+-        # type: (Optional[VersionType]) -> Any
+-        if version is None or isinstance(version, tuple):
+-            return version
+-        if isinstance(version, list):
+-            return tuple(version)
+-        # assume string
+-        return tuple(map(int, version.split(u'.')))
+-
+-    @property
+-    def versioned_resolver(self):
+-        # type: () -> Any
+-        """
+-        select the resolver based on the version we are parsing
+-        """
+-        version = self.processing_version
+-        if version not in self._version_implicit_resolver:
+-            for x in implicit_resolvers:
+-                if version in x[0]:
+-                    self.add_version_implicit_resolver(version, x[1], x[2], x[3])
+-        return self._version_implicit_resolver[version]
+-
+-    def resolve(self, kind, value, implicit):
+-        # type: (Any, Any, Any) -> Any
+-        if kind is ScalarNode and implicit[0]:
+-            if value == "":
+-                resolvers = self.versioned_resolver.get("", [])
+-            else:
+-                resolvers = self.versioned_resolver.get(value[0], [])
+-            resolvers += self.versioned_resolver.get(None, [])
+-            for tag, regexp in resolvers:
+-                if regexp.match(value):
+-                    return tag
+-            implicit = implicit[1]
+-        if bool(self.yaml_path_resolvers):
+-            exact_paths = self.resolver_exact_paths[-1]
+-            if kind in exact_paths:
+-                return exact_paths[kind]
+-            if None in exact_paths:
+-                return exact_paths[None]
+-        if kind is ScalarNode:
+-            return self.DEFAULT_SCALAR_TAG
+-        elif kind is SequenceNode:
+-            return self.DEFAULT_SEQUENCE_TAG
+-        elif kind is MappingNode:
+-            return self.DEFAULT_MAPPING_TAG
+-
+-    @property
+-    def processing_version(self):
+-        # type: () -> Any
+-        try:
+-            version = self.loadumper._scanner.yaml_version
+-        except AttributeError:
+-            try:
+-                if hasattr(self.loadumper, 'typ'):
+-                    version = self.loadumper.version
+-                else:
+-                    version = self.loadumper._serializer.use_version  # dumping
+-            except AttributeError:
+-                version = None
+-        if version is None:
+-            version = self._loader_version
+-            if version is None:
+-                version = _DEFAULT_YAML_VERSION
+-        return version
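The `implicit_resolvers` table removed above is what distinguishes YAML 1.1 from 1.2 scalar typing: a scalar's first character selects candidate (tag, regexp) pairs and the first match wins. A small sketch using the two bool patterns taken directly from the deleted table:

```python
import re

# Implicit tag resolution as in the deleted resolver.py, reduced to
# booleans. YAML 1.1 accepts yes/no/on/off; YAML 1.2 does not. Both
# patterns are copied from the removed implicit_resolvers table.
BOOL_12 = re.compile(r'^(?:true|True|TRUE|false|False|FALSE)$', re.X)
BOOL_11 = re.compile(r'''^(?:y|Y|yes|Yes|YES|n|N|no|No|NO
        |true|True|TRUE|false|False|FALSE
        |on|On|ON|off|Off|OFF)$''', re.X)

def resolve(value, version=(1, 2)):
    pattern = BOOL_12 if version >= (1, 2) else BOOL_11
    if pattern.match(value):
        return 'tag:yaml.org,2002:bool'
    return 'tag:yaml.org,2002:str'  # DEFAULT_SCALAR_TAG fallback

print(resolve('yes', (1, 1)))  # tag:yaml.org,2002:bool
print(resolve('yes', (1, 2)))  # tag:yaml.org,2002:str
```

This version dependence is why the `VersionedResolver` above delays building its pattern table until it knows which YAML version it is parsing.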
+diff --git a/dynaconf/vendor_src/ruamel/yaml/scalarbool.py b/dynaconf/vendor_src/ruamel/yaml/scalarbool.py
+deleted file mode 100644
+index e3ea2f2..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/scalarbool.py
++++ /dev/null
+@@ -1,51 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-"""
+-You cannot subclass bool, and this is necessary for round-tripping anchored
+-bool values (and also if you want to preserve the original way of writing)
+-
+-bool.__bases__ is type 'int', so that is what is used as the basis for ScalarBoolean as well.
+-
+-You can use these in an if statement, but not when testing equivalence
+-"""
+-
+-from .anchor import Anchor
+-
+-if False:  # MYPY
+-    from typing import Text, Any, Dict, List  # NOQA
+-
+-__all__ = ['ScalarBoolean']
+-
+-# no need for no_limit_int -> int
+-
+-
+-class ScalarBoolean(int):
+-    def __new__(cls, *args, **kw):
+-        # type: (Any, Any, Any) -> Any
+-        anchor = kw.pop('anchor', None)  # type: ignore
+-        b = int.__new__(cls, *args, **kw)  # type: ignore
+-        if anchor is not None:
+-            b.yaml_set_anchor(anchor, always_dump=True)
+-        return b
+-
+-    @property
+-    def anchor(self):
+-        # type: () -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            setattr(self, Anchor.attrib, Anchor())
+-        return getattr(self, Anchor.attrib)
+-
+-    def yaml_anchor(self, any=False):
+-        # type: (bool) -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            return None
+-        if any or self.anchor.always_dump:
+-            return self.anchor
+-        return None
+-
+-    def yaml_set_anchor(self, value, always_dump=False):
+-        # type: (Any, bool) -> None
+-        self.anchor.value = value
+-        self.anchor.always_dump = always_dump
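As the docstring in the deleted scalarbool.py notes, `bool` cannot be subclassed in Python, so the module derives from `int` (the base of `bool`) to keep anchored booleans round-trippable. A simplified sketch of that trade-off:

```python
# bool is a final type, so an anchored boolean must be an int
# subclass instead. Simplified sketch of the deleted ScalarBoolean.

class MiniScalarBoolean(int):
    """A 0/1 int that carries a YAML anchor name."""
    def __new__(cls, value, anchor=None):
        b = int.__new__(cls, bool(value))
        b.anchor = anchor
        return b

try:
    class Impossible(bool):  # raises TypeError: bool is not subclassable
        pass
except TypeError as e:
    print('cannot subclass bool:', e)

flag = MiniScalarBoolean(True, anchor='my_flag')
print(bool(flag), flag.anchor)  # usable in `if`; anchor survives
print(flag is True)             # False: an int, not the bool singleton
```

The last line is the "cannot test equivalence" caveat from the docstring: identity checks against `True`/`False` fail even though truth testing works.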
+diff --git a/dynaconf/vendor_src/ruamel/yaml/scalarfloat.py b/dynaconf/vendor_src/ruamel/yaml/scalarfloat.py
+deleted file mode 100644
+index 9553cd5..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/scalarfloat.py
++++ /dev/null
+@@ -1,127 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-import sys
+-from .compat import no_limit_int  # NOQA
+-from .anchor import Anchor
+-
+-if False:  # MYPY
+-    from typing import Text, Any, Dict, List  # NOQA
+-
+-__all__ = ['ScalarFloat', 'ExponentialFloat', 'ExponentialCapsFloat']
+-
+-
+-class ScalarFloat(float):
+-    def __new__(cls, *args, **kw):
+-        # type: (Any, Any, Any) -> Any
+-        width = kw.pop('width', None)  # type: ignore
+-        prec = kw.pop('prec', None)  # type: ignore
+-        m_sign = kw.pop('m_sign', None)  # type: ignore
+-        m_lead0 = kw.pop('m_lead0', 0)  # type: ignore
+-        exp = kw.pop('exp', None)  # type: ignore
+-        e_width = kw.pop('e_width', None)  # type: ignore
+-        e_sign = kw.pop('e_sign', None)  # type: ignore
+-        underscore = kw.pop('underscore', None)  # type: ignore
+-        anchor = kw.pop('anchor', None)  # type: ignore
+-        v = float.__new__(cls, *args, **kw)  # type: ignore
+-        v._width = width
+-        v._prec = prec
+-        v._m_sign = m_sign
+-        v._m_lead0 = m_lead0
+-        v._exp = exp
+-        v._e_width = e_width
+-        v._e_sign = e_sign
+-        v._underscore = underscore
+-        if anchor is not None:
+-            v.yaml_set_anchor(anchor, always_dump=True)
+-        return v
+-
+-    def __iadd__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        return float(self) + a
+-        x = type(self)(self + a)
+-        x._width = self._width
+-        x._underscore = self._underscore[:] if self._underscore is not None else None  # NOQA
+-        return x
+-
+-    def __ifloordiv__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        return float(self) // a
+-        x = type(self)(self // a)
+-        x._width = self._width
+-        x._underscore = self._underscore[:] if self._underscore is not None else None  # NOQA
+-        return x
+-
+-    def __imul__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        return float(self) * a
+-        x = type(self)(self * a)
+-        x._width = self._width
+-        x._underscore = self._underscore[:] if self._underscore is not None else None  # NOQA
+-        x._prec = self._prec  # check for others
+-        return x
+-
+-    def __ipow__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        return float(self) ** a
+-        x = type(self)(self ** a)
+-        x._width = self._width
+-        x._underscore = self._underscore[:] if self._underscore is not None else None  # NOQA
+-        return x
+-
+-    def __isub__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        return float(self) - a
+-        x = type(self)(self - a)
+-        x._width = self._width
+-        x._underscore = self._underscore[:] if self._underscore is not None else None  # NOQA
+-        return x
+-
+-    @property
+-    def anchor(self):
+-        # type: () -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            setattr(self, Anchor.attrib, Anchor())
+-        return getattr(self, Anchor.attrib)
+-
+-    def yaml_anchor(self, any=False):
+-        # type: (bool) -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            return None
+-        if any or self.anchor.always_dump:
+-            return self.anchor
+-        return None
+-
+-    def yaml_set_anchor(self, value, always_dump=False):
+-        # type: (Any, bool) -> None
+-        self.anchor.value = value
+-        self.anchor.always_dump = always_dump
+-
+-    def dump(self, out=sys.stdout):
+-        # type: (Any) -> Any
+-        out.write(
+-            'ScalarFloat({}| w:{}, p:{}, s:{}, lz:{}, _:{}|{}, w:{}, s:{})\n'.format(
+-                self,
+-                self._width,  # type: ignore
+-                self._prec,  # type: ignore
+-                self._m_sign,  # type: ignore
+-                self._m_lead0,  # type: ignore
+-                self._underscore,  # type: ignore
+-                self._exp,  # type: ignore
+-                self._e_width,  # type: ignore
+-                self._e_sign,  # type: ignore
+-            )
+-        )
+-
+-
+-class ExponentialFloat(ScalarFloat):
+-    def __new__(cls, value, width=None, underscore=None):
+-        # type: (Any, Any, Any) -> Any
+-        return ScalarFloat.__new__(cls, value, width=width, underscore=underscore)
+-
+-
+-class ExponentialCapsFloat(ScalarFloat):
+-    def __new__(cls, value, width=None, underscore=None):
+-        # type: (Any, Any, Any) -> Any
+-        return ScalarFloat.__new__(cls, value, width=width, underscore=underscore)
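The scalarfloat.py removal above follows the same pattern: a `float` subclass carrying formatting metadata (width, precision, exponent style) so a round trip can re-emit `1.50` rather than `1.5`, while arithmetic deliberately decays to a plain `float`, discarding the metadata (note the deleted `__iadd__` returning `float(self) + a` before any copying code). A sketch under illustrative attribute names:

```python
# Format-preserving float in the spirit of the deleted ScalarFloat:
# metadata survives round-tripping but not arithmetic. Attribute
# names are illustrative.

class MiniScalarFloat(float):
    def __new__(cls, value, width=None, prec=None):
        v = float.__new__(cls, value)
        v._width = width
        v._prec = prec
        return v

    def __str__(self):
        # re-emit with the original precision when we know it
        if self._prec is not None:
            return '%.*f' % (self._prec, float(self))
        return float.__str__(self)

x = MiniScalarFloat('1.50', width=4, prec=2)
print(str(x))                 # 1.50 -- original precision preserved
print(x + 1)                  # 2.5  -- plain float, metadata lost
print(type(x + 1) is float)   # True
```

The same decay-on-arithmetic choice appears in the scalarint.py deletions that follow, except there the in-place operators copy `_width` and `_underscore` onto the result.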
+diff --git a/dynaconf/vendor_src/ruamel/yaml/scalarint.py b/dynaconf/vendor_src/ruamel/yaml/scalarint.py
+deleted file mode 100644
+index 305af25..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/scalarint.py
++++ /dev/null
+@@ -1,130 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-from .compat import no_limit_int  # NOQA
+-from .anchor import Anchor
+-
+-if False:  # MYPY
+-    from typing import Text, Any, Dict, List  # NOQA
+-
+-__all__ = ['ScalarInt', 'BinaryInt', 'OctalInt', 'HexInt', 'HexCapsInt', 'DecimalInt']
+-
+-
+-class ScalarInt(no_limit_int):
+-    def __new__(cls, *args, **kw):
+-        # type: (Any, Any, Any) -> Any
+-        width = kw.pop('width', None)  # type: ignore
+-        underscore = kw.pop('underscore', None)  # type: ignore
+-        anchor = kw.pop('anchor', None)  # type: ignore
+-        v = no_limit_int.__new__(cls, *args, **kw)  # type: ignore
+-        v._width = width
+-        v._underscore = underscore
+-        if anchor is not None:
+-            v.yaml_set_anchor(anchor, always_dump=True)
+-        return v
+-
+-    def __iadd__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        x = type(self)(self + a)
+-        x._width = self._width  # type: ignore
+-        x._underscore = (  # type: ignore
+-            self._underscore[:] if self._underscore is not None else None  # type: ignore
+-        )  # NOQA
+-        return x
+-
+-    def __ifloordiv__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        x = type(self)(self // a)
+-        x._width = self._width  # type: ignore
+-        x._underscore = (  # type: ignore
+-            self._underscore[:] if self._underscore is not None else None  # type: ignore
+-        )  # NOQA
+-        return x
+-
+-    def __imul__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        x = type(self)(self * a)
+-        x._width = self._width  # type: ignore
+-        x._underscore = (  # type: ignore
+-            self._underscore[:] if self._underscore is not None else None  # type: ignore
+-        )  # NOQA
+-        return x
+-
+-    def __ipow__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        x = type(self)(self ** a)
+-        x._width = self._width  # type: ignore
+-        x._underscore = (  # type: ignore
+-            self._underscore[:] if self._underscore is not None else None  # type: ignore
+-        )  # NOQA
+-        return x
+-
+-    def __isub__(self, a):  # type: ignore
+-        # type: (Any) -> Any
+-        x = type(self)(self - a)
+-        x._width = self._width  # type: ignore
+-        x._underscore = (  # type: ignore
+-            self._underscore[:] if self._underscore is not None else None  # type: ignore
+-        )  # NOQA
+-        return x
+-
+-    @property
+-    def anchor(self):
+-        # type: () -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            setattr(self, Anchor.attrib, Anchor())
+-        return getattr(self, Anchor.attrib)
+-
+-    def yaml_anchor(self, any=False):
+-        # type: (bool) -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            return None
+-        if any or self.anchor.always_dump:
+-            return self.anchor
+-        return None
+-
+-    def yaml_set_anchor(self, value, always_dump=False):
+-        # type: (Any, bool) -> None
+-        self.anchor.value = value
+-        self.anchor.always_dump = always_dump
+-
+-
+-class BinaryInt(ScalarInt):
+-    def __new__(cls, value, width=None, underscore=None, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
+-
+-
+-class OctalInt(ScalarInt):
+-    def __new__(cls, value, width=None, underscore=None, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
+-
+-
+-# mixed casing of A-F is not supported, when loading the first non digit
+-# determines the case
+-
+-
+-class HexInt(ScalarInt):
+-    """uses lower case (a-f)"""
+-
+-    def __new__(cls, value, width=None, underscore=None, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
+-
+-
+-class HexCapsInt(ScalarInt):
+-    """uses upper case (A-F)"""
+-
+-    def __new__(cls, value, width=None, underscore=None, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
+-
+-
+-class DecimalInt(ScalarInt):
+-    """needed if anchor"""
+-
+-    def __new__(cls, value, width=None, underscore=None, anchor=None):
+-        # type: (Any, Any, Any, Any) -> Any
+-        return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/scalarstring.py b/dynaconf/vendor_src/ruamel/yaml/scalarstring.py
+deleted file mode 100644
+index 2ec4383..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/scalarstring.py
++++ /dev/null
+@@ -1,156 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-from .compat import text_type
+-from .anchor import Anchor
+-
+-if False:  # MYPY
+-    from typing import Text, Any, Dict, List  # NOQA
+-
+-__all__ = [
+-    'ScalarString',
+-    'LiteralScalarString',
+-    'FoldedScalarString',
+-    'SingleQuotedScalarString',
+-    'DoubleQuotedScalarString',
+-    'PlainScalarString',
+-    # PreservedScalarString is the old name, as it was the first to be preserved on rt,
+-    # use LiteralScalarString instead
+-    'PreservedScalarString',
+-]
+-
+-
+-class ScalarString(text_type):
+-    __slots__ = Anchor.attrib
+-
+-    def __new__(cls, *args, **kw):
+-        # type: (Any, Any) -> Any
+-        anchor = kw.pop('anchor', None)  # type: ignore
+-        ret_val = text_type.__new__(cls, *args, **kw)  # type: ignore
+-        if anchor is not None:
+-            ret_val.yaml_set_anchor(anchor, always_dump=True)
+-        return ret_val
+-
+-    def replace(self, old, new, maxreplace=-1):
+-        # type: (Any, Any, int) -> Any
+-        return type(self)((text_type.replace(self, old, new, maxreplace)))
+-
+-    @property
+-    def anchor(self):
+-        # type: () -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            setattr(self, Anchor.attrib, Anchor())
+-        return getattr(self, Anchor.attrib)
+-
+-    def yaml_anchor(self, any=False):
+-        # type: (bool) -> Any
+-        if not hasattr(self, Anchor.attrib):
+-            return None
+-        if any or self.anchor.always_dump:
+-            return self.anchor
+-        return None
+-
+-    def yaml_set_anchor(self, value, always_dump=False):
+-        # type: (Any, bool) -> None
+-        self.anchor.value = value
+-        self.anchor.always_dump = always_dump
+-
+-
+-class LiteralScalarString(ScalarString):
+-    __slots__ = 'comment'  # the comment after the | on the first line
+-
+-    style = '|'
+-
+-    def __new__(cls, value, anchor=None):
+-        # type: (Text, Any) -> Any
+-        return ScalarString.__new__(cls, value, anchor=anchor)
+-
+-
+-PreservedScalarString = LiteralScalarString
+-
+-
+-class FoldedScalarString(ScalarString):
+-    __slots__ = ('fold_pos', 'comment')  # the comment after the > on the first line
+-
+-    style = '>'
+-
+-    def __new__(cls, value, anchor=None):
+-        # type: (Text, Any) -> Any
+-        return ScalarString.__new__(cls, value, anchor=anchor)
+-
+-
+-class SingleQuotedScalarString(ScalarString):
+-    __slots__ = ()
+-
+-    style = "'"
+-
+-    def __new__(cls, value, anchor=None):
+-        # type: (Text, Any) -> Any
+-        return ScalarString.__new__(cls, value, anchor=anchor)
+-
+-
+-class DoubleQuotedScalarString(ScalarString):
+-    __slots__ = ()
+-
+-    style = '"'
+-
+-    def __new__(cls, value, anchor=None):
+-        # type: (Text, Any) -> Any
+-        return ScalarString.__new__(cls, value, anchor=anchor)
+-
+-
+-class PlainScalarString(ScalarString):
+-    __slots__ = ()
+-
+-    style = ''
+-
+-    def __new__(cls, value, anchor=None):
+-        # type: (Text, Any) -> Any
+-        return ScalarString.__new__(cls, value, anchor=anchor)
+-
+-
+-def preserve_literal(s):
+-    # type: (Text) -> Text
+-    return LiteralScalarString(s.replace('\r\n', '\n').replace('\r', '\n'))
+-
+-
+-def walk_tree(base, map=None):
+-    # type: (Any, Any) -> None
+-    """
+-    the routine here walks over a simple yaml tree (recursing in
+-    dict values and list items) and converts strings that
+-    have multiple lines to literal scalars
+-
+-    You can also provide an explicit (ordered) mapping for multiple transforms
+-    (first of which is executed):
+-        map = ruamel.yaml.compat.ordereddict
+-        map['\n'] = preserve_literal
+-        map[':'] = SingleQuotedScalarString
+-        walk_tree(data, map=map)
+-    """
+-    from dynaconf.vendor.ruamel.yaml.compat import string_types
+-    from dynaconf.vendor.ruamel.yaml.compat import MutableMapping, MutableSequence  # type: ignore
+-
+-    if map is None:
+-        map = {'\n': preserve_literal}
+-
+-    if isinstance(base, MutableMapping):
+-        for k in base:
+-            v = base[k]  # type: Text
+-            if isinstance(v, string_types):
+-                for ch in map:
+-                    if ch in v:
+-                        base[k] = map[ch](v)
+-                        break
+-            else:
+-                walk_tree(v)
+-    elif isinstance(base, MutableSequence):
+-        for idx, elem in enumerate(base):
+-            if isinstance(elem, string_types):
+-                for ch in map:
+-                    if ch in elem:  # type: ignore
+-                        base[idx] = map[ch](elem)
+-                        break
+-            else:
+-                walk_tree(elem)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/scanner.py b/dynaconf/vendor_src/ruamel/yaml/scanner.py
+deleted file mode 100644
+index 7872a4c..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/scanner.py
++++ /dev/null
+@@ -1,1980 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-# Scanner produces tokens of the following types:
+-# STREAM-START
+-# STREAM-END
+-# DIRECTIVE(name, value)
+-# DOCUMENT-START
+-# DOCUMENT-END
+-# BLOCK-SEQUENCE-START
+-# BLOCK-MAPPING-START
+-# BLOCK-END
+-# FLOW-SEQUENCE-START
+-# FLOW-MAPPING-START
+-# FLOW-SEQUENCE-END
+-# FLOW-MAPPING-END
+-# BLOCK-ENTRY
+-# FLOW-ENTRY
+-# KEY
+-# VALUE
+-# ALIAS(value)
+-# ANCHOR(value)
+-# TAG(value)
+-# SCALAR(value, plain, style)
+-#
+-# RoundTripScanner
+-# COMMENT(value)
+-#
+-# Read comments in the Scanner code for more details.
+-#
+-
+-from .error import MarkedYAMLError
+-from .tokens import *  # NOQA
+-from .compat import utf8, unichr, PY3, check_anchorname_char, nprint  # NOQA
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Union, Text  # NOQA
+-    from .compat import VersionType  # NOQA
+-
+-__all__ = ['Scanner', 'RoundTripScanner', 'ScannerError']
+-
+-
+-_THE_END = '\n\0\r\x85\u2028\u2029'
+-_THE_END_SPACE_TAB = ' \n\0\t\r\x85\u2028\u2029'
+-_SPACE_TAB = ' \t'
+-
+-
+-class ScannerError(MarkedYAMLError):
+-    pass
+-
+-
+-class SimpleKey(object):
+-    # See below simple keys treatment.
+-
+-    def __init__(self, token_number, required, index, line, column, mark):
+-        # type: (Any, Any, int, int, int, Any) -> None
+-        self.token_number = token_number
+-        self.required = required
+-        self.index = index
+-        self.line = line
+-        self.column = column
+-        self.mark = mark
+-
+-
+-class Scanner(object):
+-    def __init__(self, loader=None):
+-        # type: (Any) -> None
+-        """Initialize the scanner."""
+-        # It is assumed that Scanner and Reader will have a common descendant.
+-        # Reader do the dirty work of checking for BOM and converting the
+-        # input data to Unicode. It also adds NUL to the end.
+-        #
+-        # Reader supports the following methods
+-        #   self.peek(i=0)    # peek the next i-th character
+-        #   self.prefix(l=1)  # peek the next l characters
+-        #   self.forward(l=1) # read the next l characters and move the pointer
+-
+-        self.loader = loader
+-        if self.loader is not None and getattr(self.loader, '_scanner', None) is None:
+-            self.loader._scanner = self
+-        self.reset_scanner()
+-        self.first_time = False
+-        self.yaml_version = None  # type: Any
+-
+-    @property
+-    def flow_level(self):
+-        # type: () -> int
+-        return len(self.flow_context)
+-
+-    def reset_scanner(self):
+-        # type: () -> None
+-        # Had we reached the end of the stream?
+-        self.done = False
+-
+-        # flow_context is an expanding/shrinking list consisting of '{' and '['
+-        # for each unclosed flow context. If empty list that means block context
+-        self.flow_context = []  # type: List[Text]
+-
+-        # List of processed tokens that are not yet emitted.
+-        self.tokens = []  # type: List[Any]
+-
+-        # Add the STREAM-START token.
+-        self.fetch_stream_start()
+-
+-        # Number of tokens that were emitted through the `get_token` method.
+-        self.tokens_taken = 0
+-
+-        # The current indentation level.
+-        self.indent = -1
+-
+-        # Past indentation levels.
+-        self.indents = []  # type: List[int]
+-
+-        # Variables related to simple keys treatment.
+-
+-        # A simple key is a key that is not denoted by the '?' indicator.
+-        # Example of simple keys:
+-        #   ---
+-        #   block simple key: value
+-        #   ? not a simple key:
+-        #   : { flow simple key: value }
+-        # We emit the KEY token before all keys, so when we find a potential
+-        # simple key, we try to locate the corresponding ':' indicator.
+-        # Simple keys should be limited to a single line and 1024 characters.
+-
+-        # Can a simple key start at the current position? A simple key may
+-        # start:
+-        # - at the beginning of the line, not counting indentation spaces
+-        #       (in block context),
+-        # - after '{', '[', ',' (in the flow context),
+-        # - after '?', ':', '-' (in the block context).
+-        # In the block context, this flag also signifies if a block collection
+-        # may start at the current position.
+-        self.allow_simple_key = True
+-
+-        # Keep track of possible simple keys. This is a dictionary. The key
+-        # is `flow_level`; there can be no more that one possible simple key
+-        # for each level. The value is a SimpleKey record:
+-        #   (token_number, required, index, line, column, mark)
+-        # A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow),
+-        # '[', or '{' tokens.
+-        self.possible_simple_keys = {}  # type: Dict[Any, Any]
+-
+-    @property
+-    def reader(self):
+-        # type: () -> Any
+-        try:
+-            return self._scanner_reader  # type: ignore
+-        except AttributeError:
+-            if hasattr(self.loader, 'typ'):
+-                self._scanner_reader = self.loader.reader
+-            else:
+-                self._scanner_reader = self.loader._reader
+-            return self._scanner_reader
+-
+-    @property
+-    def scanner_processing_version(self):  # prefix until un-composited
+-        # type: () -> Any
+-        if hasattr(self.loader, 'typ'):
+-            return self.loader.resolver.processing_version
+-        return self.loader.processing_version
+-
+-    # Public methods.
+-
+-    def check_token(self, *choices):
+-        # type: (Any) -> bool
+-        # Check if the next token is one of the given types.
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-        if bool(self.tokens):
+-            if not choices:
+-                return True
+-            for choice in choices:
+-                if isinstance(self.tokens[0], choice):
+-                    return True
+-        return False
+-
+-    def peek_token(self):
+-        # type: () -> Any
+-        # Return the next token, but do not delete if from the queue.
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-        if bool(self.tokens):
+-            return self.tokens[0]
+-
+-    def get_token(self):
+-        # type: () -> Any
+-        # Return the next token.
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-        if bool(self.tokens):
+-            self.tokens_taken += 1
+-            return self.tokens.pop(0)
+-
+-    # Private methods.
+-
+-    def need_more_tokens(self):
+-        # type: () -> bool
+-        if self.done:
+-            return False
+-        if not self.tokens:
+-            return True
+-        # The current token may be a potential simple key, so we
+-        # need to look further.
+-        self.stale_possible_simple_keys()
+-        if self.next_possible_simple_key() == self.tokens_taken:
+-            return True
+-        return False
+-
+-    def fetch_comment(self, comment):
+-        # type: (Any) -> None
+-        raise NotImplementedError
+-
+-    def fetch_more_tokens(self):
+-        # type: () -> Any
+-        # Eat whitespaces and comments until we reach the next token.
+-        comment = self.scan_to_next_token()
+-        if comment is not None:  # never happens for base scanner
+-            return self.fetch_comment(comment)
+-        # Remove obsolete possible simple keys.
+-        self.stale_possible_simple_keys()
+-
+-        # Compare the current indentation and column. It may add some tokens
+-        # and decrease the current indentation level.
+-        self.unwind_indent(self.reader.column)
+-
+-        # Peek the next character.
+-        ch = self.reader.peek()
+-
+-        # Is it the end of stream?
+-        if ch == '\0':
+-            return self.fetch_stream_end()
+-
+-        # Is it a directive?
+-        if ch == '%' and self.check_directive():
+-            return self.fetch_directive()
+-
+-        # Is it the document start?
+-        if ch == '-' and self.check_document_start():
+-            return self.fetch_document_start()
+-
+-        # Is it the document end?
+-        if ch == '.' and self.check_document_end():
+-            return self.fetch_document_end()
+-
+-        # TODO: support for BOM within a stream.
+-        # if ch == u'\uFEFF':
+-        #     return self.fetch_bom()    <-- issue BOMToken
+-
+-        # Note: the order of the following checks is NOT significant.
+-
+-        # Is it the flow sequence start indicator?
+-        if ch == '[':
+-            return self.fetch_flow_sequence_start()
+-
+-        # Is it the flow mapping start indicator?
+-        if ch == '{':
+-            return self.fetch_flow_mapping_start()
+-
+-        # Is it the flow sequence end indicator?
+-        if ch == ']':
+-            return self.fetch_flow_sequence_end()
+-
+-        # Is it the flow mapping end indicator?
+-        if ch == '}':
+-            return self.fetch_flow_mapping_end()
+-
+-        # Is it the flow entry indicator?
+-        if ch == ',':
+-            return self.fetch_flow_entry()
+-
+-        # Is it the block entry indicator?
+-        if ch == '-' and self.check_block_entry():
+-            return self.fetch_block_entry()
+-
+-        # Is it the key indicator?
+-        if ch == '?' and self.check_key():
+-            return self.fetch_key()
+-
+-        # Is it the value indicator?
+-        if ch == ':' and self.check_value():
+-            return self.fetch_value()
+-
+-        # Is it an alias?
+-        if ch == '*':
+-            return self.fetch_alias()
+-
+-        # Is it an anchor?
+-        if ch == '&':
+-            return self.fetch_anchor()
+-
+-        # Is it a tag?
+-        if ch == '!':
+-            return self.fetch_tag()
+-
+-        # Is it a literal scalar?
+-        if ch == '|' and not self.flow_level:
+-            return self.fetch_literal()
+-
+-        # Is it a folded scalar?
+-        if ch == '>' and not self.flow_level:
+-            return self.fetch_folded()
+-
+-        # Is it a single quoted scalar?
+-        if ch == "'":
+-            return self.fetch_single()
+-
+-        # Is it a double quoted scalar?
+-        if ch == '"':
+-            return self.fetch_double()
+-
+-        # It must be a plain scalar then.
+-        if self.check_plain():
+-            return self.fetch_plain()
+-
+-        # No? It's an error. Let's produce a nice error message.
+-        raise ScannerError(
+-            'while scanning for the next token',
+-            None,
+-            'found character %r that cannot start any token' % utf8(ch),
+-            self.reader.get_mark(),
+-        )
+-
+-    # Simple keys treatment.
+-
+-    def next_possible_simple_key(self):
+-        # type: () -> Any
+-        # Return the number of the nearest possible simple key. Actually we
+-        # don't need to loop through the whole dictionary. We may replace it
+-        # with the following code:
+-        #   if not self.possible_simple_keys:
+-        #       return None
+-        #   return self.possible_simple_keys[
+-        #           min(self.possible_simple_keys.keys())].token_number
+-        min_token_number = None
+-        for level in self.possible_simple_keys:
+-            key = self.possible_simple_keys[level]
+-            if min_token_number is None or key.token_number < min_token_number:
+-                min_token_number = key.token_number
+-        return min_token_number
+-
+-    def stale_possible_simple_keys(self):
+-        # type: () -> None
+-        # Remove entries that are no longer possible simple keys. According to
+-        # the YAML specification, simple keys
+-        # - should be limited to a single line,
+-        # - should be no longer than 1024 characters.
+-        # Disabling this procedure will allow simple keys of any length and
+-        # height (may cause problems if indentation is broken though).
+-        for level in list(self.possible_simple_keys):
+-            key = self.possible_simple_keys[level]
+-            if key.line != self.reader.line or self.reader.index - key.index > 1024:
+-                if key.required:
+-                    raise ScannerError(
+-                        'while scanning a simple key',
+-                        key.mark,
+-                        "could not find expected ':'",
+-                        self.reader.get_mark(),
+-                    )
+-                del self.possible_simple_keys[level]
+-
+-    def save_possible_simple_key(self):
+-        # type: () -> None
+-        # The next token may start a simple key. We check if it's possible
+-        # and save its position. This function is called for
+-        #   ALIAS, ANCHOR, TAG, SCALAR(flow), '[', and '{'.
+-
+-        # Check if a simple key is required at the current position.
+-        required = not self.flow_level and self.indent == self.reader.column
+-
+-        # The next token might be a simple key. Let's save it's number and
+-        # position.
+-        if self.allow_simple_key:
+-            self.remove_possible_simple_key()
+-            token_number = self.tokens_taken + len(self.tokens)
+-            key = SimpleKey(
+-                token_number,
+-                required,
+-                self.reader.index,
+-                self.reader.line,
+-                self.reader.column,
+-                self.reader.get_mark(),
+-            )
+-            self.possible_simple_keys[self.flow_level] = key
+-
+-    def remove_possible_simple_key(self):
+-        # type: () -> None
+-        # Remove the saved possible key position at the current flow level.
+-        if self.flow_level in self.possible_simple_keys:
+-            key = self.possible_simple_keys[self.flow_level]
+-
+-            if key.required:
+-                raise ScannerError(
+-                    'while scanning a simple key',
+-                    key.mark,
+-                    "could not find expected ':'",
+-                    self.reader.get_mark(),
+-                )
+-
+-            del self.possible_simple_keys[self.flow_level]
+-
+-    # Indentation functions.
+-
+-    def unwind_indent(self, column):
+-        # type: (Any) -> None
+-        # In flow context, tokens should respect indentation.
+-        # Actually the condition should be `self.indent >= column` according to
+-        # the spec. But this condition will prohibit intuitively correct
+-        # constructions such as
+-        # key : {
+-        # }
+-        # ####
+-        # if self.flow_level and self.indent > column:
+-        #     raise ScannerError(None, None,
+-        #             "invalid intendation or unclosed '[' or '{'",
+-        #             self.reader.get_mark())
+-
+-        # In the flow context, indentation is ignored. We make the scanner less
+-        # restrictive then specification requires.
+-        if bool(self.flow_level):
+-            return
+-
+-        # In block context, we may need to issue the BLOCK-END tokens.
+-        while self.indent > column:
+-            mark = self.reader.get_mark()
+-            self.indent = self.indents.pop()
+-            self.tokens.append(BlockEndToken(mark, mark))
+-
+-    def add_indent(self, column):
+-        # type: (int) -> bool
+-        # Check if we need to increase indentation.
+-        if self.indent < column:
+-            self.indents.append(self.indent)
+-            self.indent = column
+-            return True
+-        return False
+-
+-    # Fetchers.
+-
+-    def fetch_stream_start(self):
+-        # type: () -> None
+-        # We always add STREAM-START as the first token and STREAM-END as the
+-        # last token.
+-        # Read the token.
+-        mark = self.reader.get_mark()
+-        # Add STREAM-START.
+-        self.tokens.append(StreamStartToken(mark, mark, encoding=self.reader.encoding))
+-
+-    def fetch_stream_end(self):
+-        # type: () -> None
+-        # Set the current intendation to -1.
+-        self.unwind_indent(-1)
+-        # Reset simple keys.
+-        self.remove_possible_simple_key()
+-        self.allow_simple_key = False
+-        self.possible_simple_keys = {}
+-        # Read the token.
+-        mark = self.reader.get_mark()
+-        # Add STREAM-END.
+-        self.tokens.append(StreamEndToken(mark, mark))
+-        # The steam is finished.
+-        self.done = True
+-
+-    def fetch_directive(self):
+-        # type: () -> None
+-        # Set the current intendation to -1.
+-        self.unwind_indent(-1)
+-
+-        # Reset simple keys.
+-        self.remove_possible_simple_key()
+-        self.allow_simple_key = False
+-
+-        # Scan and add DIRECTIVE.
+-        self.tokens.append(self.scan_directive())
+-
+-    def fetch_document_start(self):
+-        # type: () -> None
+-        self.fetch_document_indicator(DocumentStartToken)
+-
+-    def fetch_document_end(self):
+-        # type: () -> None
+-        self.fetch_document_indicator(DocumentEndToken)
+-
+-    def fetch_document_indicator(self, TokenClass):
+-        # type: (Any) -> None
+-        # Set the current intendation to -1.
+-        self.unwind_indent(-1)
+-
+-        # Reset simple keys. Note that there could not be a block collection
+-        # after '---'.
+-        self.remove_possible_simple_key()
+-        self.allow_simple_key = False
+-
+-        # Add DOCUMENT-START or DOCUMENT-END.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward(3)
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(TokenClass(start_mark, end_mark))
+-
+-    def fetch_flow_sequence_start(self):
+-        # type: () -> None
+-        self.fetch_flow_collection_start(FlowSequenceStartToken, to_push='[')
+-
+-    def fetch_flow_mapping_start(self):
+-        # type: () -> None
+-        self.fetch_flow_collection_start(FlowMappingStartToken, to_push='{')
+-
+-    def fetch_flow_collection_start(self, TokenClass, to_push):
+-        # type: (Any, Text) -> None
+-        # '[' and '{' may start a simple key.
+-        self.save_possible_simple_key()
+-        # Increase the flow level.
+-        self.flow_context.append(to_push)
+-        # Simple keys are allowed after '[' and '{'.
+-        self.allow_simple_key = True
+-        # Add FLOW-SEQUENCE-START or FLOW-MAPPING-START.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(TokenClass(start_mark, end_mark))
+-
+-    def fetch_flow_sequence_end(self):
+-        # type: () -> None
+-        self.fetch_flow_collection_end(FlowSequenceEndToken)
+-
+-    def fetch_flow_mapping_end(self):
+-        # type: () -> None
+-        self.fetch_flow_collection_end(FlowMappingEndToken)
+-
+-    def fetch_flow_collection_end(self, TokenClass):
+-        # type: (Any) -> None
+-        # Reset possible simple key on the current level.
+-        self.remove_possible_simple_key()
+-        # Decrease the flow level.
+-        try:
+-            popped = self.flow_context.pop()  # NOQA
+-        except IndexError:
+-            # We must not be in a list or object.
+-            # Defer error handling to the parser.
+-            pass
+-        # No simple keys after ']' or '}'.
+-        self.allow_simple_key = False
+-        # Add FLOW-SEQUENCE-END or FLOW-MAPPING-END.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(TokenClass(start_mark, end_mark))
+-
+-    def fetch_flow_entry(self):
+-        # type: () -> None
+-        # Simple keys are allowed after ','.
+-        self.allow_simple_key = True
+-        # Reset possible simple key on the current level.
+-        self.remove_possible_simple_key()
+-        # Add FLOW-ENTRY.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(FlowEntryToken(start_mark, end_mark))
+-
+-    def fetch_block_entry(self):
+-        # type: () -> None
+-        # Block context needs additional checks.
+-        if not self.flow_level:
+-            # Are we allowed to start a new entry?
+-            if not self.allow_simple_key:
+-                raise ScannerError(
+-                    None, None, 'sequence entries are not allowed here', self.reader.get_mark()
+-                )
+-            # We may need to add BLOCK-SEQUENCE-START.
+-            if self.add_indent(self.reader.column):
+-                mark = self.reader.get_mark()
+-                self.tokens.append(BlockSequenceStartToken(mark, mark))
+-        # It's an error for the block entry to occur in the flow context,
+-        # but we let the parser detect this.
+-        else:
+-            pass
+-        # Simple keys are allowed after '-'.
+-        self.allow_simple_key = True
+-        # Reset possible simple key on the current level.
+-        self.remove_possible_simple_key()
+-
+-        # Add BLOCK-ENTRY.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(BlockEntryToken(start_mark, end_mark))
+-
+-    def fetch_key(self):
+-        # type: () -> None
+-        # Block context needs additional checks.
+-        if not self.flow_level:
+-
+-            # Are we allowed to start a key (not nessesary a simple)?
+-            if not self.allow_simple_key:
+-                raise ScannerError(
+-                    None, None, 'mapping keys are not allowed here', self.reader.get_mark()
+-                )
+-
+-            # We may need to add BLOCK-MAPPING-START.
+-            if self.add_indent(self.reader.column):
+-                mark = self.reader.get_mark()
+-                self.tokens.append(BlockMappingStartToken(mark, mark))
+-
+-        # Simple keys are allowed after '?' in the block context.
+-        self.allow_simple_key = not self.flow_level
+-
+-        # Reset possible simple key on the current level.
+-        self.remove_possible_simple_key()
+-
+-        # Add KEY.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(KeyToken(start_mark, end_mark))
+-
+-    def fetch_value(self):
+-        # type: () -> None
+-        # Do we determine a simple key?
+-        if self.flow_level in self.possible_simple_keys:
+-            # Add KEY.
+-            key = self.possible_simple_keys[self.flow_level]
+-            del self.possible_simple_keys[self.flow_level]
+-            self.tokens.insert(
+-                key.token_number - self.tokens_taken, KeyToken(key.mark, key.mark)
+-            )
+-
+-            # If this key starts a new block mapping, we need to add
+-            # BLOCK-MAPPING-START.
+-            if not self.flow_level:
+-                if self.add_indent(key.column):
+-                    self.tokens.insert(
+-                        key.token_number - self.tokens_taken,
+-                        BlockMappingStartToken(key.mark, key.mark),
+-                    )
+-
+-            # There cannot be two simple keys one after another.
+-            self.allow_simple_key = False
+-
+-        # It must be a part of a complex key.
+-        else:
+-
+-            # Block context needs additional checks.
+-            # (Do we really need them? They will be caught by the parser
+-            # anyway.)
+-            if not self.flow_level:
+-
+-                # We are allowed to start a complex value if and only if
+-                # we can start a simple key.
+-                if not self.allow_simple_key:
+-                    raise ScannerError(
+-                        None,
+-                        None,
+-                        'mapping values are not allowed here',
+-                        self.reader.get_mark(),
+-                    )
+-
+-            # If this value starts a new block mapping, we need to add
+-            # BLOCK-MAPPING-START.  It will be detected as an error later by
+-            # the parser.
+-            if not self.flow_level:
+-                if self.add_indent(self.reader.column):
+-                    mark = self.reader.get_mark()
+-                    self.tokens.append(BlockMappingStartToken(mark, mark))
+-
+-            # Simple keys are allowed after ':' in the block context.
+-            self.allow_simple_key = not self.flow_level
+-
+-            # Reset possible simple key on the current level.
+-            self.remove_possible_simple_key()
+-
+-        # Add VALUE.
+-        start_mark = self.reader.get_mark()
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        self.tokens.append(ValueToken(start_mark, end_mark))
+-
+-    def fetch_alias(self):
+-        # type: () -> None
+-        # ALIAS could be a simple key.
+-        self.save_possible_simple_key()
+-        # No simple keys after ALIAS.
+-        self.allow_simple_key = False
+-        # Scan and add ALIAS.
+-        self.tokens.append(self.scan_anchor(AliasToken))
+-
+-    def fetch_anchor(self):
+-        # type: () -> None
+-        # ANCHOR could start a simple key.
+-        self.save_possible_simple_key()
+-        # No simple keys after ANCHOR.
+-        self.allow_simple_key = False
+-        # Scan and add ANCHOR.
+-        self.tokens.append(self.scan_anchor(AnchorToken))
+-
+-    def fetch_tag(self):
+-        # type: () -> None
+-        # TAG could start a simple key.
+-        self.save_possible_simple_key()
+-        # No simple keys after TAG.
+-        self.allow_simple_key = False
+-        # Scan and add TAG.
+-        self.tokens.append(self.scan_tag())
+-
+-    def fetch_literal(self):
+-        # type: () -> None
+-        self.fetch_block_scalar(style='|')
+-
+-    def fetch_folded(self):
+-        # type: () -> None
+-        self.fetch_block_scalar(style='>')
+-
+-    def fetch_block_scalar(self, style):
+-        # type: (Any) -> None
+-        # A simple key may follow a block scalar.
+-        self.allow_simple_key = True
+-        # Reset possible simple key on the current level.
+-        self.remove_possible_simple_key()
+-        # Scan and add SCALAR.
+-        self.tokens.append(self.scan_block_scalar(style))
+-
+-    def fetch_single(self):
+-        # type: () -> None
+-        self.fetch_flow_scalar(style="'")
+-
+-    def fetch_double(self):
+-        # type: () -> None
+-        self.fetch_flow_scalar(style='"')
+-
+-    def fetch_flow_scalar(self, style):
+-        # type: (Any) -> None
+-        # A flow scalar could be a simple key.
+-        self.save_possible_simple_key()
+-        # No simple keys after flow scalars.
+-        self.allow_simple_key = False
+-        # Scan and add SCALAR.
+-        self.tokens.append(self.scan_flow_scalar(style))
+-
+-    def fetch_plain(self):
+-        # type: () -> None
+-        # A plain scalar could be a simple key.
+-        self.save_possible_simple_key()
+-        # No simple keys after plain scalars. But note that `scan_plain` will
+-        # change this flag if the scan is finished at the beginning of the
+-        # line.
+-        self.allow_simple_key = False
+-        # Scan and add SCALAR. May change `allow_simple_key`.
+-        self.tokens.append(self.scan_plain())
+-
+-    # Checkers.
+-
+-    def check_directive(self):
+-        # type: () -> Any
+-        # DIRECTIVE:        ^ '%' ...
+-        # The '%' indicator is already checked.
+-        if self.reader.column == 0:
+-            return True
+-        return None
+-
+-    def check_document_start(self):
+-        # type: () -> Any
+-        # DOCUMENT-START:   ^ '---' (' '|'\n')
+-        if self.reader.column == 0:
+-            if self.reader.prefix(3) == '---' and self.reader.peek(3) in _THE_END_SPACE_TAB:
+-                return True
+-        return None
+-
+-    def check_document_end(self):
+-        # type: () -> Any
+-        # DOCUMENT-END:     ^ '...' (' '|'\n')
+-        if self.reader.column == 0:
+-            if self.reader.prefix(3) == '...' and self.reader.peek(3) in _THE_END_SPACE_TAB:
+-                return True
+-        return None
+-
+-    def check_block_entry(self):
+-        # type: () -> Any
+-        # BLOCK-ENTRY:      '-' (' '|'\n')
+-        return self.reader.peek(1) in _THE_END_SPACE_TAB
+-
+-    def check_key(self):
+-        # type: () -> Any
+-        # KEY(flow context):    '?'
+-        if bool(self.flow_level):
+-            return True
+-        # KEY(block context):   '?' (' '|'\n')
+-        return self.reader.peek(1) in _THE_END_SPACE_TAB
+-
+-    def check_value(self):
+-        # type: () -> Any
+-        # VALUE(flow context):  ':'
+-        if self.scanner_processing_version == (1, 1):
+-            if bool(self.flow_level):
+-                return True
+-        else:
+-            if bool(self.flow_level):
+-                if self.flow_context[-1] == '[':
+-                    if self.reader.peek(1) not in _THE_END_SPACE_TAB:
+-                        return False
+-                elif self.tokens and isinstance(self.tokens[-1], ValueToken):
+-                    # mapping flow context scanning a value token
+-                    if self.reader.peek(1) not in _THE_END_SPACE_TAB:
+-                        return False
+-                return True
+-        # VALUE(block context): ':' (' '|'\n')
+-        return self.reader.peek(1) in _THE_END_SPACE_TAB
+-
+-    def check_plain(self):
+-        # type: () -> Any
+-        # A plain scalar may start with any non-space character except:
+-        #   '-', '?', ':', ',', '[', ']', '{', '}',
+-        #   '#', '&', '*', '!', '|', '>', '\'', '\"',
+-        #   '%', '@', '`'.
+-        #
+-        # It may also start with
+-        #   '-', '?', ':'
+-        # if it is followed by a non-space character.
+-        #
+-        # Note that we limit the last rule to the block context (except the
+-        # '-' character) because we want the flow context to be space
+-        # independent.
+-        srp = self.reader.peek
+-        ch = srp()
+-        if self.scanner_processing_version == (1, 1):
+-            return ch not in '\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>\'"%@`' or (
+-                srp(1) not in _THE_END_SPACE_TAB
+-                and (ch == '-' or (not self.flow_level and ch in '?:'))
+-            )
+-        # YAML 1.2
+-        if ch not in '\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>\'"%@`':
+-            # ###################                ^ ???
+-            return True
+-        ch1 = srp(1)
+-        if ch == '-' and ch1 not in _THE_END_SPACE_TAB:
+-            return True
+-        if ch == ':' and bool(self.flow_level) and ch1 not in _SPACE_TAB:
+-            return True
+-
+-        return srp(1) not in _THE_END_SPACE_TAB and (
+-            ch == '-' or (not self.flow_level and ch in '?:')
+-        )
+-
+-    # Scanners.
+-
+-    def scan_to_next_token(self):
+-        # type: () -> Any
+-        # We ignore spaces, line breaks and comments.
+-        # If we find a line break in the block context, we set the flag
+-        # `allow_simple_key` on.
+-        # The byte order mark is stripped if it's the first character in the
+-        # stream. We do not yet support BOM inside the stream as the
+-        # specification requires. Any such mark will be considered as a part
+-        # of the document.
+-        #
+-        # TODO: We need to make tab handling rules more sane. A good rule is
+-        #   Tabs cannot precede tokens
+-        #   BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
+-        #   KEY(block), VALUE(block), BLOCK-ENTRY
+-        # So the checking code is
+-        #   if <TAB>:
+-        #       self.allow_simple_keys = False
+-        # We also need to add the check for `allow_simple_keys == True` to
+-        # `unwind_indent` before issuing BLOCK-END.
+-        # Scanners for block, flow, and plain scalars need to be modified.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        if self.reader.index == 0 and srp() == '\uFEFF':
+-            srf()
+-        found = False
+-        _the_end = _THE_END
+-        while not found:
+-            while srp() == ' ':
+-                srf()
+-            if srp() == '#':
+-                while srp() not in _the_end:
+-                    srf()
+-            if self.scan_line_break():
+-                if not self.flow_level:
+-                    self.allow_simple_key = True
+-            else:
+-                found = True
+-        return None
+-
+-    def scan_directive(self):
+-        # type: () -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        start_mark = self.reader.get_mark()
+-        srf()
+-        name = self.scan_directive_name(start_mark)
+-        value = None
+-        if name == 'YAML':
+-            value = self.scan_yaml_directive_value(start_mark)
+-            end_mark = self.reader.get_mark()
+-        elif name == 'TAG':
+-            value = self.scan_tag_directive_value(start_mark)
+-            end_mark = self.reader.get_mark()
+-        else:
+-            end_mark = self.reader.get_mark()
+-            while srp() not in _THE_END:
+-                srf()
+-        self.scan_directive_ignored_line(start_mark)
+-        return DirectiveToken(name, value, start_mark, end_mark)
+-
+-    def scan_directive_name(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        length = 0
+-        srp = self.reader.peek
+-        ch = srp(length)
+-        while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' or ch in '-_:.':
+-            length += 1
+-            ch = srp(length)
+-        if not length:
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                'expected alphabetic or numeric character, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        value = self.reader.prefix(length)
+-        self.reader.forward(length)
+-        ch = srp()
+-        if ch not in '\0 \r\n\x85\u2028\u2029':
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                'expected alphabetic or numeric character, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        return value
+-
+-    def scan_yaml_directive_value(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        while srp() == ' ':
+-            srf()
+-        major = self.scan_yaml_directive_number(start_mark)
+-        if srp() != '.':
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                "expected a digit or '.', but found %r" % utf8(srp()),
+-                self.reader.get_mark(),
+-            )
+-        srf()
+-        minor = self.scan_yaml_directive_number(start_mark)
+-        if srp() not in '\0 \r\n\x85\u2028\u2029':
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                "expected a digit or ' ', but found %r" % utf8(srp()),
+-                self.reader.get_mark(),
+-            )
+-        self.yaml_version = (major, minor)
+-        return self.yaml_version
+-
+-    def scan_yaml_directive_number(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        ch = srp()
+-        if not ('0' <= ch <= '9'):
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                'expected a digit, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        length = 0
+-        while '0' <= srp(length) <= '9':
+-            length += 1
+-        value = int(self.reader.prefix(length))
+-        srf(length)
+-        return value
+-
+-    def scan_tag_directive_value(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        while srp() == ' ':
+-            srf()
+-        handle = self.scan_tag_directive_handle(start_mark)
+-        while srp() == ' ':
+-            srf()
+-        prefix = self.scan_tag_directive_prefix(start_mark)
+-        return (handle, prefix)
+-
+-    def scan_tag_directive_handle(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        value = self.scan_tag_handle('directive', start_mark)
+-        ch = self.reader.peek()
+-        if ch != ' ':
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                "expected ' ', but found %r" % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        return value
+-
+-    def scan_tag_directive_prefix(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        value = self.scan_tag_uri('directive', start_mark)
+-        ch = self.reader.peek()
+-        if ch not in '\0 \r\n\x85\u2028\u2029':
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                "expected ' ', but found %r" % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        return value
+-
+-    def scan_directive_ignored_line(self, start_mark):
+-        # type: (Any) -> None
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        while srp() == ' ':
+-            srf()
+-        if srp() == '#':
+-            while srp() not in _THE_END:
+-                srf()
+-        ch = srp()
+-        if ch not in _THE_END:
+-            raise ScannerError(
+-                'while scanning a directive',
+-                start_mark,
+-                'expected a comment or a line break, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        self.scan_line_break()
+-
+-    def scan_anchor(self, TokenClass):
+-        # type: (Any) -> Any
+-        # The specification does not restrict characters for anchors and
+-        # aliases. This may lead to problems, for instance, the document:
+-        #   [ *alias, value ]
+-        # can be interpreted in two ways, as
+-        #   [ "value" ]
+-        # and
+-        #   [ *alias , "value" ]
+-        # Therefore we restrict aliases to numbers and ASCII letters.
+-        srp = self.reader.peek
+-        start_mark = self.reader.get_mark()
+-        indicator = srp()
+-        if indicator == '*':
+-            name = 'alias'
+-        else:
+-            name = 'anchor'
+-        self.reader.forward()
+-        length = 0
+-        ch = srp(length)
+-        # while u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' \
+-        #         or ch in u'-_':
+-        while check_anchorname_char(ch):
+-            length += 1
+-            ch = srp(length)
+-        if not length:
+-            raise ScannerError(
+-                'while scanning an %s' % (name,),
+-                start_mark,
+-                'expected alphabetic or numeric character, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        value = self.reader.prefix(length)
+-        self.reader.forward(length)
+-        # ch1 = ch
+-        # ch = srp()   # no need to peek, ch is already set
+-        # assert ch1 == ch
+-        if ch not in '\0 \t\r\n\x85\u2028\u2029?:,[]{}%@`':
+-            raise ScannerError(
+-                'while scanning an %s' % (name,),
+-                start_mark,
+-                'expected alphabetic or numeric character, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        end_mark = self.reader.get_mark()
+-        return TokenClass(value, start_mark, end_mark)
+-
+-    def scan_tag(self):
+-        # type: () -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        start_mark = self.reader.get_mark()
+-        ch = srp(1)
+-        if ch == '<':
+-            handle = None
+-            self.reader.forward(2)
+-            suffix = self.scan_tag_uri('tag', start_mark)
+-            if srp() != '>':
+-                raise ScannerError(
+-                    'while parsing a tag',
+-                    start_mark,
+-                    "expected '>', but found %r" % utf8(srp()),
+-                    self.reader.get_mark(),
+-                )
+-            self.reader.forward()
+-        elif ch in _THE_END_SPACE_TAB:
+-            handle = None
+-            suffix = '!'
+-            self.reader.forward()
+-        else:
+-            length = 1
+-            use_handle = False
+-            while ch not in '\0 \r\n\x85\u2028\u2029':
+-                if ch == '!':
+-                    use_handle = True
+-                    break
+-                length += 1
+-                ch = srp(length)
+-            handle = '!'
+-            if use_handle:
+-                handle = self.scan_tag_handle('tag', start_mark)
+-            else:
+-                handle = '!'
+-                self.reader.forward()
+-            suffix = self.scan_tag_uri('tag', start_mark)
+-        ch = srp()
+-        if ch not in '\0 \r\n\x85\u2028\u2029':
+-            raise ScannerError(
+-                'while scanning a tag',
+-                start_mark,
+-                "expected ' ', but found %r" % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        value = (handle, suffix)
+-        end_mark = self.reader.get_mark()
+-        return TagToken(value, start_mark, end_mark)
+-
+-    def scan_block_scalar(self, style, rt=False):
+-        # type: (Any, Optional[bool]) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        if style == '>':
+-            folded = True
+-        else:
+-            folded = False
+-
+-        chunks = []  # type: List[Any]
+-        start_mark = self.reader.get_mark()
+-
+-        # Scan the header.
+-        self.reader.forward()
+-        chomping, increment = self.scan_block_scalar_indicators(start_mark)
+-        # block scalar comment e.g. : |+  # comment text
+-        block_scalar_comment = self.scan_block_scalar_ignored_line(start_mark)
+-
+-        # Determine the indentation level and go to the first non-empty line.
+-        min_indent = self.indent + 1
+-        if increment is None:
+-            # no increment and top level, min_indent could be 0
+-            if min_indent < 1 and (
+-                style not in '|>'
+-                or (self.scanner_processing_version == (1, 1))
+-                and getattr(
+-                    self.loader, 'top_level_block_style_scalar_no_indent_error_1_1', False
+-                )
+-            ):
+-                min_indent = 1
+-            breaks, max_indent, end_mark = self.scan_block_scalar_indentation()
+-            indent = max(min_indent, max_indent)
+-        else:
+-            if min_indent < 1:
+-                min_indent = 1
+-            indent = min_indent + increment - 1
+-            breaks, end_mark = self.scan_block_scalar_breaks(indent)
+-        line_break = ""
+-
+-        # Scan the inner part of the block scalar.
+-        while self.reader.column == indent and srp() != '\0':
+-            chunks.extend(breaks)
+-            leading_non_space = srp() not in ' \t'
+-            length = 0
+-            while srp(length) not in _THE_END:
+-                length += 1
+-            chunks.append(self.reader.prefix(length))
+-            self.reader.forward(length)
+-            line_break = self.scan_line_break()
+-            breaks, end_mark = self.scan_block_scalar_breaks(indent)
+-            if style in '|>' and min_indent == 0:
+-                # at the beginning of a line, if in block style see if
+-                # end of document/start_new_document
+-                if self.check_document_start() or self.check_document_end():
+-                    break
+-            if self.reader.column == indent and srp() != '\0':
+-
+-                # Unfortunately, folding rules are ambiguous.
+-                #
+-                # This is the folding according to the specification:
+-
+-                if rt and folded and line_break == '\n':
+-                    chunks.append('\a')
+-                if folded and line_break == '\n' and leading_non_space and srp() not in ' \t':
+-                    if not breaks:
+-                        chunks.append(' ')
+-                else:
+-                    chunks.append(line_break)
+-
+-                # This is Clark Evans's interpretation (also in the spec
+-                # examples):
+-                #
+-                # if folded and line_break == u'\n':
+-                #     if not breaks:
+-                #         if srp() not in ' \t':
+-                #             chunks.append(u' ')
+-                #         else:
+-                #             chunks.append(line_break)
+-                # else:
+-                #     chunks.append(line_break)
+-            else:
+-                break
+-
+-        # Process trailing line breaks. The 'chomping' setting determines
+-        # whether they are included in the value.
+-        trailing = []  # type: List[Any]
+-        if chomping in [None, True]:
+-            chunks.append(line_break)
+-        if chomping is True:
+-            chunks.extend(breaks)
+-        elif chomping in [None, False]:
+-            trailing.extend(breaks)
+-
+-        # We are done.
+-        token = ScalarToken("".join(chunks), False, start_mark, end_mark, style)
+-        if block_scalar_comment is not None:
+-            token.add_pre_comments([block_scalar_comment])
+-        if len(trailing) > 0:
+-            # nprint('trailing 1', trailing)  # XXXXX
+-            # Eat whitespaces and comments until we reach the next token.
+-            comment = self.scan_to_next_token()
+-            while comment:
+-                trailing.append(' ' * comment[1].column + comment[0])
+-                comment = self.scan_to_next_token()
+-
+-            # Keep track of the trailing whitespace and following comments
+-            # as a comment token, if isn't all included in the actual value.
+-            comment_end_mark = self.reader.get_mark()
+-            comment = CommentToken("".join(trailing), end_mark, comment_end_mark)
+-            token.add_post_comment(comment)
+-        return token
+-
+-    def scan_block_scalar_indicators(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        chomping = None
+-        increment = None
+-        ch = srp()
+-        if ch in '+-':
+-            if ch == '+':
+-                chomping = True
+-            else:
+-                chomping = False
+-            self.reader.forward()
+-            ch = srp()
+-            if ch in '0123456789':
+-                increment = int(ch)
+-                if increment == 0:
+-                    raise ScannerError(
+-                        'while scanning a block scalar',
+-                        start_mark,
+-                        'expected indentation indicator in the range 1-9, ' 'but found 0',
+-                        self.reader.get_mark(),
+-                    )
+-                self.reader.forward()
+-        elif ch in '0123456789':
+-            increment = int(ch)
+-            if increment == 0:
+-                raise ScannerError(
+-                    'while scanning a block scalar',
+-                    start_mark,
+-                    'expected indentation indicator in the range 1-9, ' 'but found 0',
+-                    self.reader.get_mark(),
+-                )
+-            self.reader.forward()
+-            ch = srp()
+-            if ch in '+-':
+-                if ch == '+':
+-                    chomping = True
+-                else:
+-                    chomping = False
+-                self.reader.forward()
+-        ch = srp()
+-        if ch not in '\0 \r\n\x85\u2028\u2029':
+-            raise ScannerError(
+-                'while scanning a block scalar',
+-                start_mark,
+-                'expected chomping or indentation indicators, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        return chomping, increment
+-
+-    def scan_block_scalar_ignored_line(self, start_mark):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        prefix = ''
+-        comment = None
+-        while srp() == ' ':
+-            prefix += srp()
+-            srf()
+-        if srp() == '#':
+-            comment = prefix
+-            while srp() not in _THE_END:
+-                comment += srp()
+-                srf()
+-        ch = srp()
+-        if ch not in _THE_END:
+-            raise ScannerError(
+-                'while scanning a block scalar',
+-                start_mark,
+-                'expected a comment or a line break, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        self.scan_line_break()
+-        return comment
+-
+-    def scan_block_scalar_indentation(self):
+-        # type: () -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        chunks = []
+-        max_indent = 0
+-        end_mark = self.reader.get_mark()
+-        while srp() in ' \r\n\x85\u2028\u2029':
+-            if srp() != ' ':
+-                chunks.append(self.scan_line_break())
+-                end_mark = self.reader.get_mark()
+-            else:
+-                srf()
+-                if self.reader.column > max_indent:
+-                    max_indent = self.reader.column
+-        return chunks, max_indent, end_mark
+-
+-    def scan_block_scalar_breaks(self, indent):
+-        # type: (int) -> Any
+-        # See the specification for details.
+-        chunks = []
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        end_mark = self.reader.get_mark()
+-        while self.reader.column < indent and srp() == ' ':
+-            srf()
+-        while srp() in '\r\n\x85\u2028\u2029':
+-            chunks.append(self.scan_line_break())
+-            end_mark = self.reader.get_mark()
+-            while self.reader.column < indent and srp() == ' ':
+-                srf()
+-        return chunks, end_mark
+-
+-    def scan_flow_scalar(self, style):
+-        # type: (Any) -> Any
+-        # See the specification for details.
+-        # Note that we loosen indentation rules for quoted scalars. Quoted
+-        # scalars don't need to adhere to indentation because " and ' clearly
+-        # mark the beginning and the end of them. Therefore we are less
+-        # restrictive than the specification requires. We only need to check
+-        # that document separators are not included in scalars.
+-        if style == '"':
+-            double = True
+-        else:
+-            double = False
+-        srp = self.reader.peek
+-        chunks = []  # type: List[Any]
+-        start_mark = self.reader.get_mark()
+-        quote = srp()
+-        self.reader.forward()
+-        chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
+-        while srp() != quote:
+-            chunks.extend(self.scan_flow_scalar_spaces(double, start_mark))
+-            chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
+-        self.reader.forward()
+-        end_mark = self.reader.get_mark()
+-        return ScalarToken("".join(chunks), False, start_mark, end_mark, style)
+-
+-    ESCAPE_REPLACEMENTS = {
+-        '0': '\0',
+-        'a': '\x07',
+-        'b': '\x08',
+-        't': '\x09',
+-        '\t': '\x09',
+-        'n': '\x0A',
+-        'v': '\x0B',
+-        'f': '\x0C',
+-        'r': '\x0D',
+-        'e': '\x1B',
+-        ' ': '\x20',
+-        '"': '"',
+-        '/': '/',  # as per http://www.json.org/
+-        '\\': '\\',
+-        'N': '\x85',
+-        '_': '\xA0',
+-        'L': '\u2028',
+-        'P': '\u2029',
+-    }
+-
+-    ESCAPE_CODES = {'x': 2, 'u': 4, 'U': 8}
+-
+-    def scan_flow_scalar_non_spaces(self, double, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        chunks = []  # type: List[Any]
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        while True:
+-            length = 0
+-            while srp(length) not in ' \n\'"\\\0\t\r\x85\u2028\u2029':
+-                length += 1
+-            if length != 0:
+-                chunks.append(self.reader.prefix(length))
+-                srf(length)
+-            ch = srp()
+-            if not double and ch == "'" and srp(1) == "'":
+-                chunks.append("'")
+-                srf(2)
+-            elif (double and ch == "'") or (not double and ch in '"\\'):
+-                chunks.append(ch)
+-                srf()
+-            elif double and ch == '\\':
+-                srf()
+-                ch = srp()
+-                if ch in self.ESCAPE_REPLACEMENTS:
+-                    chunks.append(self.ESCAPE_REPLACEMENTS[ch])
+-                    srf()
+-                elif ch in self.ESCAPE_CODES:
+-                    length = self.ESCAPE_CODES[ch]
+-                    srf()
+-                    for k in range(length):
+-                        if srp(k) not in '0123456789ABCDEFabcdef':
+-                            raise ScannerError(
+-                                'while scanning a double-quoted scalar',
+-                                start_mark,
+-                                'expected escape sequence of %d hexadecimal '
+-                                'numbers, but found %r' % (length, utf8(srp(k))),
+-                                self.reader.get_mark(),
+-                            )
+-                    code = int(self.reader.prefix(length), 16)
+-                    chunks.append(unichr(code))
+-                    srf(length)
+-                elif ch in '\n\r\x85\u2028\u2029':
+-                    self.scan_line_break()
+-                    chunks.extend(self.scan_flow_scalar_breaks(double, start_mark))
+-                else:
+-                    raise ScannerError(
+-                        'while scanning a double-quoted scalar',
+-                        start_mark,
+-                        'found unknown escape character %r' % utf8(ch),
+-                        self.reader.get_mark(),
+-                    )
+-            else:
+-                return chunks
+-
+-    def scan_flow_scalar_spaces(self, double, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        chunks = []
+-        length = 0
+-        while srp(length) in ' \t':
+-            length += 1
+-        whitespaces = self.reader.prefix(length)
+-        self.reader.forward(length)
+-        ch = srp()
+-        if ch == '\0':
+-            raise ScannerError(
+-                'while scanning a quoted scalar',
+-                start_mark,
+-                'found unexpected end of stream',
+-                self.reader.get_mark(),
+-            )
+-        elif ch in '\r\n\x85\u2028\u2029':
+-            line_break = self.scan_line_break()
+-            breaks = self.scan_flow_scalar_breaks(double, start_mark)
+-            if line_break != '\n':
+-                chunks.append(line_break)
+-            elif not breaks:
+-                chunks.append(' ')
+-            chunks.extend(breaks)
+-        else:
+-            chunks.append(whitespaces)
+-        return chunks
+-
+-    def scan_flow_scalar_breaks(self, double, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        chunks = []  # type: List[Any]
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        while True:
+-            # Instead of checking indentation, we check for document
+-            # separators.
+-            prefix = self.reader.prefix(3)
+-            if (prefix == '---' or prefix == '...') and srp(3) in _THE_END_SPACE_TAB:
+-                raise ScannerError(
+-                    'while scanning a quoted scalar',
+-                    start_mark,
+-                    'found unexpected document separator',
+-                    self.reader.get_mark(),
+-                )
+-            while srp() in ' \t':
+-                srf()
+-            if srp() in '\r\n\x85\u2028\u2029':
+-                chunks.append(self.scan_line_break())
+-            else:
+-                return chunks
+-
+-    def scan_plain(self):
+-        # type: () -> Any
+-        # See the specification for details.
+-        # We add an additional restriction for the flow context:
+-        #   plain scalars in the flow context cannot contain ',', ': '  and '?'.
+-        # We also keep track of the `allow_simple_key` flag here.
+-        # Indentation rules are loosed for the flow context.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        chunks = []  # type: List[Any]
+-        start_mark = self.reader.get_mark()
+-        end_mark = start_mark
+-        indent = self.indent + 1
+-        # We allow zero indentation for scalars, but then we need to check for
+-        # document separators at the beginning of the line.
+-        # if indent == 0:
+-        #     indent = 1
+-        spaces = []  # type: List[Any]
+-        while True:
+-            length = 0
+-            if srp() == '#':
+-                break
+-            while True:
+-                ch = srp(length)
+-                if ch == ':' and srp(length + 1) not in _THE_END_SPACE_TAB:
+-                    pass
+-                elif ch == '?' and self.scanner_processing_version != (1, 1):
+-                    pass
+-                elif (
+-                    ch in _THE_END_SPACE_TAB
+-                    or (
+-                        not self.flow_level
+-                        and ch == ':'
+-                        and srp(length + 1) in _THE_END_SPACE_TAB
+-                    )
+-                    or (self.flow_level and ch in ',:?[]{}')
+-                ):
+-                    break
+-                length += 1
+-            # It's not clear what we should do with ':' in the flow context.
+-            if (
+-                self.flow_level
+-                and ch == ':'
+-                and srp(length + 1) not in '\0 \t\r\n\x85\u2028\u2029,[]{}'
+-            ):
+-                srf(length)
+-                raise ScannerError(
+-                    'while scanning a plain scalar',
+-                    start_mark,
+-                    "found unexpected ':'",
+-                    self.reader.get_mark(),
+-                    'Please check '
+-                    'http://pyyaml.org/wiki/YAMLColonInFlowContext '
+-                    'for details.',
+-                )
+-            if length == 0:
+-                break
+-            self.allow_simple_key = False
+-            chunks.extend(spaces)
+-            chunks.append(self.reader.prefix(length))
+-            srf(length)
+-            end_mark = self.reader.get_mark()
+-            spaces = self.scan_plain_spaces(indent, start_mark)
+-            if (
+-                not spaces
+-                or srp() == '#'
+-                or (not self.flow_level and self.reader.column < indent)
+-            ):
+-                break
+-
+-        token = ScalarToken("".join(chunks), True, start_mark, end_mark)
+-        if spaces and spaces[0] == '\n':
+-            # Create a comment token to preserve the trailing line breaks.
+-            comment = CommentToken("".join(spaces) + '\n', start_mark, end_mark)
+-            token.add_post_comment(comment)
+-        return token
+-
+-    def scan_plain_spaces(self, indent, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        # The specification is really confusing about tabs in plain scalars.
+-        # We just forbid them completely. Do not use tabs in YAML!
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        chunks = []
+-        length = 0
+-        while srp(length) in ' ':
+-            length += 1
+-        whitespaces = self.reader.prefix(length)
+-        self.reader.forward(length)
+-        ch = srp()
+-        if ch in '\r\n\x85\u2028\u2029':
+-            line_break = self.scan_line_break()
+-            self.allow_simple_key = True
+-            prefix = self.reader.prefix(3)
+-            if (prefix == '---' or prefix == '...') and srp(3) in _THE_END_SPACE_TAB:
+-                return
+-            breaks = []
+-            while srp() in ' \r\n\x85\u2028\u2029':
+-                if srp() == ' ':
+-                    srf()
+-                else:
+-                    breaks.append(self.scan_line_break())
+-                    prefix = self.reader.prefix(3)
+-                    if (prefix == '---' or prefix == '...') and srp(3) in _THE_END_SPACE_TAB:
+-                        return
+-            if line_break != '\n':
+-                chunks.append(line_break)
+-            elif not breaks:
+-                chunks.append(' ')
+-            chunks.extend(breaks)
+-        elif whitespaces:
+-            chunks.append(whitespaces)
+-        return chunks
+-
+-    def scan_tag_handle(self, name, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        # For some strange reasons, the specification does not allow '_' in
+-        # tag handles. I have allowed it anyway.
+-        srp = self.reader.peek
+-        ch = srp()
+-        if ch != '!':
+-            raise ScannerError(
+-                'while scanning a %s' % (name,),
+-                start_mark,
+-                "expected '!', but found %r" % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        length = 1
+-        ch = srp(length)
+-        if ch != ' ':
+-            while '0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' or ch in '-_':
+-                length += 1
+-                ch = srp(length)
+-            if ch != '!':
+-                self.reader.forward(length)
+-                raise ScannerError(
+-                    'while scanning a %s' % (name,),
+-                    start_mark,
+-                    "expected '!', but found %r" % utf8(ch),
+-                    self.reader.get_mark(),
+-                )
+-            length += 1
+-        value = self.reader.prefix(length)
+-        self.reader.forward(length)
+-        return value
+-
+-    def scan_tag_uri(self, name, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        # Note: we do not check if URI is well-formed.
+-        srp = self.reader.peek
+-        chunks = []
+-        length = 0
+-        ch = srp(length)
+-        while (
+-            '0' <= ch <= '9'
+-            or 'A' <= ch <= 'Z'
+-            or 'a' <= ch <= 'z'
+-            or ch in "-;/?:@&=+$,_.!~*'()[]%"
+-            or ((self.scanner_processing_version > (1, 1)) and ch == '#')
+-        ):
+-            if ch == '%':
+-                chunks.append(self.reader.prefix(length))
+-                self.reader.forward(length)
+-                length = 0
+-                chunks.append(self.scan_uri_escapes(name, start_mark))
+-            else:
+-                length += 1
+-            ch = srp(length)
+-        if length != 0:
+-            chunks.append(self.reader.prefix(length))
+-            self.reader.forward(length)
+-            length = 0
+-        if not chunks:
+-            raise ScannerError(
+-                'while parsing a %s' % (name,),
+-                start_mark,
+-                'expected URI, but found %r' % utf8(ch),
+-                self.reader.get_mark(),
+-            )
+-        return "".join(chunks)
+-
+-    def scan_uri_escapes(self, name, start_mark):
+-        # type: (Any, Any) -> Any
+-        # See the specification for details.
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        code_bytes = []  # type: List[Any]
+-        mark = self.reader.get_mark()
+-        while srp() == '%':
+-            srf()
+-            for k in range(2):
+-                if srp(k) not in '0123456789ABCDEFabcdef':
+-                    raise ScannerError(
+-                        'while scanning a %s' % (name,),
+-                        start_mark,
+-                        'expected URI escape sequence of 2 hexdecimal numbers,'
+-                        ' but found %r' % utf8(srp(k)),
+-                        self.reader.get_mark(),
+-                    )
+-            if PY3:
+-                code_bytes.append(int(self.reader.prefix(2), 16))
+-            else:
+-                code_bytes.append(chr(int(self.reader.prefix(2), 16)))
+-            srf(2)
+-        try:
+-            if PY3:
+-                value = bytes(code_bytes).decode('utf-8')
+-            else:
+-                value = unicode(b"".join(code_bytes), 'utf-8')
+-        except UnicodeDecodeError as exc:
+-            raise ScannerError('while scanning a %s' % (name,), start_mark, str(exc), mark)
+-        return value
+-
+-    def scan_line_break(self):
+-        # type: () -> Any
+-        # Transforms:
+-        #   '\r\n'      :   '\n'
+-        #   '\r'        :   '\n'
+-        #   '\n'        :   '\n'
+-        #   '\x85'      :   '\n'
+-        #   '\u2028'    :   '\u2028'
+-        #   '\u2029     :   '\u2029'
+-        #   default     :   ''
+-        ch = self.reader.peek()
+-        if ch in '\r\n\x85':
+-            if self.reader.prefix(2) == '\r\n':
+-                self.reader.forward(2)
+-            else:
+-                self.reader.forward()
+-            return '\n'
+-        elif ch in '\u2028\u2029':
+-            self.reader.forward()
+-            return ch
+-        return ""
+-
+-
+-class RoundTripScanner(Scanner):
+-    def check_token(self, *choices):
+-        # type: (Any) -> bool
+-        # Check if the next token is one of the given types.
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-        self._gather_comments()
+-        if bool(self.tokens):
+-            if not choices:
+-                return True
+-            for choice in choices:
+-                if isinstance(self.tokens[0], choice):
+-                    return True
+-        return False
+-
+-    def peek_token(self):
+-        # type: () -> Any
+-        # Return the next token, but do not delete if from the queue.
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-        self._gather_comments()
+-        if bool(self.tokens):
+-            return self.tokens[0]
+-        return None
+-
+-    def _gather_comments(self):
+-        # type: () -> Any
+-        """combine multiple comment lines"""
+-        comments = []  # type: List[Any]
+-        if not self.tokens:
+-            return comments
+-        if isinstance(self.tokens[0], CommentToken):
+-            comment = self.tokens.pop(0)
+-            self.tokens_taken += 1
+-            comments.append(comment)
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-            if not self.tokens:
+-                return comments
+-            if isinstance(self.tokens[0], CommentToken):
+-                self.tokens_taken += 1
+-                comment = self.tokens.pop(0)
+-                # nprint('dropping2', comment)
+-                comments.append(comment)
+-        if len(comments) >= 1:
+-            self.tokens[0].add_pre_comments(comments)
+-        # pull in post comment on e.g. ':'
+-        if not self.done and len(self.tokens) < 2:
+-            self.fetch_more_tokens()
+-
+-    def get_token(self):
+-        # type: () -> Any
+-        # Return the next token.
+-        while self.need_more_tokens():
+-            self.fetch_more_tokens()
+-        self._gather_comments()
+-        if bool(self.tokens):
+-            # nprint('tk', self.tokens)
+-            # only add post comment to single line tokens:
+-            # scalar, value token. FlowXEndToken, otherwise
+-            # hidden streamtokens could get them (leave them and they will be
+-            # pre comments for the next map/seq
+-            if (
+-                len(self.tokens) > 1
+-                and isinstance(
+-                    self.tokens[0],
+-                    (ScalarToken, ValueToken, FlowSequenceEndToken, FlowMappingEndToken),
+-                )
+-                and isinstance(self.tokens[1], CommentToken)
+-                and self.tokens[0].end_mark.line == self.tokens[1].start_mark.line
+-            ):
+-                self.tokens_taken += 1
+-                c = self.tokens.pop(1)
+-                self.fetch_more_tokens()
+-                while len(self.tokens) > 1 and isinstance(self.tokens[1], CommentToken):
+-                    self.tokens_taken += 1
+-                    c1 = self.tokens.pop(1)
+-                    c.value = c.value + (' ' * c1.start_mark.column) + c1.value
+-                    self.fetch_more_tokens()
+-                self.tokens[0].add_post_comment(c)
+-            elif (
+-                len(self.tokens) > 1
+-                and isinstance(self.tokens[0], ScalarToken)
+-                and isinstance(self.tokens[1], CommentToken)
+-                and self.tokens[0].end_mark.line != self.tokens[1].start_mark.line
+-            ):
+-                self.tokens_taken += 1
+-                c = self.tokens.pop(1)
+-                c.value = (
+-                    '\n' * (c.start_mark.line - self.tokens[0].end_mark.line)
+-                    + (' ' * c.start_mark.column)
+-                    + c.value
+-                )
+-                self.tokens[0].add_post_comment(c)
+-                self.fetch_more_tokens()
+-                while len(self.tokens) > 1 and isinstance(self.tokens[1], CommentToken):
+-                    self.tokens_taken += 1
+-                    c1 = self.tokens.pop(1)
+-                    c.value = c.value + (' ' * c1.start_mark.column) + c1.value
+-                    self.fetch_more_tokens()
+-            self.tokens_taken += 1
+-            return self.tokens.pop(0)
+-        return None
+-
+-    def fetch_comment(self, comment):
+-        # type: (Any) -> None
+-        value, start_mark, end_mark = comment
+-        while value and value[-1] == ' ':
+-            # empty line within indented key context
+-            # no need to update end-mark, that is not used
+-            value = value[:-1]
+-        self.tokens.append(CommentToken(value, start_mark, end_mark))
+-
+-    # scanner
+-
+-    def scan_to_next_token(self):
+-        # type: () -> Any
+-        # We ignore spaces, line breaks and comments.
+-        # If we find a line break in the block context, we set the flag
+-        # `allow_simple_key` on.
+-        # The byte order mark is stripped if it's the first character in the
+-        # stream. We do not yet support BOM inside the stream as the
+-        # specification requires. Any such mark will be considered as a part
+-        # of the document.
+-        #
+-        # TODO: We need to make tab handling rules more sane. A good rule is
+-        #   Tabs cannot precede tokens
+-        #   BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
+-        #   KEY(block), VALUE(block), BLOCK-ENTRY
+-        # So the checking code is
+-        #   if <TAB>:
+-        #       self.allow_simple_keys = False
+-        # We also need to add the check for `allow_simple_keys == True` to
+-        # `unwind_indent` before issuing BLOCK-END.
+-        # Scanners for block, flow, and plain scalars need to be modified.
+-
+-        srp = self.reader.peek
+-        srf = self.reader.forward
+-        if self.reader.index == 0 and srp() == '\uFEFF':
+-            srf()
+-        found = False
+-        while not found:
+-            while srp() == ' ':
+-                srf()
+-            ch = srp()
+-            if ch == '#':
+-                start_mark = self.reader.get_mark()
+-                comment = ch
+-                srf()
+-                while ch not in _THE_END:
+-                    ch = srp()
+-                    if ch == '\0':  # don't gobble the end-of-stream character
+-                        # but add an explicit newline as "YAML processors should terminate
+-                        # the stream with an explicit line break
+-                        # https://yaml.org/spec/1.2/spec.html#id2780069
+-                        comment += '\n'
+-                        break
+-                    comment += ch
+-                    srf()
+-                # gather any blank lines following the comment too
+-                ch = self.scan_line_break()
+-                while len(ch) > 0:
+-                    comment += ch
+-                    ch = self.scan_line_break()
+-                end_mark = self.reader.get_mark()
+-                if not self.flow_level:
+-                    self.allow_simple_key = True
+-                return comment, start_mark, end_mark
+-            if bool(self.scan_line_break()):
+-                start_mark = self.reader.get_mark()
+-                if not self.flow_level:
+-                    self.allow_simple_key = True
+-                ch = srp()
+-                if ch == '\n':  # empty toplevel lines
+-                    start_mark = self.reader.get_mark()
+-                    comment = ""
+-                    while ch:
+-                        ch = self.scan_line_break(empty_line=True)
+-                        comment += ch
+-                    if srp() == '#':
+-                        # empty line followed by indented real comment
+-                        comment = comment.rsplit('\n', 1)[0] + '\n'
+-                    end_mark = self.reader.get_mark()
+-                    return comment, start_mark, end_mark
+-            else:
+-                found = True
+-        return None
+-
+-    def scan_line_break(self, empty_line=False):
+-        # type: (bool) -> Text
+-        # Transforms:
+-        #   '\r\n'      :   '\n'
+-        #   '\r'        :   '\n'
+-        #   '\n'        :   '\n'
+-        #   '\x85'      :   '\n'
+-        #   '\u2028'    :   '\u2028'
+-        #   '\u2029     :   '\u2029'
+-        #   default     :   ''
+-        ch = self.reader.peek()  # type: Text
+-        if ch in '\r\n\x85':
+-            if self.reader.prefix(2) == '\r\n':
+-                self.reader.forward(2)
+-            else:
+-                self.reader.forward()
+-            return '\n'
+-        elif ch in '\u2028\u2029':
+-            self.reader.forward()
+-            return ch
+-        elif empty_line and ch in '\t ':
+-            self.reader.forward()
+-            return ch
+-        return ""
+-
+-    def scan_block_scalar(self, style, rt=True):
+-        # type: (Any, Optional[bool]) -> Any
+-        return Scanner.scan_block_scalar(self, style, rt=rt)
+-
+-
+-# try:
+-#     import psyco
+-#     psyco.bind(Scanner)
+-# except ImportError:
+-#     pass
+diff --git a/dynaconf/vendor_src/ruamel/yaml/serializer.py b/dynaconf/vendor_src/ruamel/yaml/serializer.py
+deleted file mode 100644
+index 0a28c60..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/serializer.py
++++ /dev/null
+@@ -1,240 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import absolute_import
+-
+-from .error import YAMLError
+-from .compat import nprint, DBG_NODE, dbg, string_types, nprintf  # NOQA
+-from .util import RegExp
+-
+-from .events import (
+-    StreamStartEvent,
+-    StreamEndEvent,
+-    MappingStartEvent,
+-    MappingEndEvent,
+-    SequenceStartEvent,
+-    SequenceEndEvent,
+-    AliasEvent,
+-    ScalarEvent,
+-    DocumentStartEvent,
+-    DocumentEndEvent,
+-)
+-from .nodes import MappingNode, ScalarNode, SequenceNode
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Union, Text, Optional  # NOQA
+-    from .compat import VersionType  # NOQA
+-
+-__all__ = ['Serializer', 'SerializerError']
+-
+-
+-class SerializerError(YAMLError):
+-    pass
+-
+-
+-class Serializer(object):
+-
+-    # 'id' and 3+ numbers, but not 000
+-    ANCHOR_TEMPLATE = u'id%03d'
+-    ANCHOR_RE = RegExp(u'id(?!000$)\\d{3,}')
+-
+-    def __init__(
+-        self,
+-        encoding=None,
+-        explicit_start=None,
+-        explicit_end=None,
+-        version=None,
+-        tags=None,
+-        dumper=None,
+-    ):
+-        # type: (Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any) -> None  # NOQA
+-        self.dumper = dumper
+-        if self.dumper is not None:
+-            self.dumper._serializer = self
+-        self.use_encoding = encoding
+-        self.use_explicit_start = explicit_start
+-        self.use_explicit_end = explicit_end
+-        if isinstance(version, string_types):
+-            self.use_version = tuple(map(int, version.split('.')))
+-        else:
+-            self.use_version = version  # type: ignore
+-        self.use_tags = tags
+-        self.serialized_nodes = {}  # type: Dict[Any, Any]
+-        self.anchors = {}  # type: Dict[Any, Any]
+-        self.last_anchor_id = 0
+-        self.closed = None  # type: Optional[bool]
+-        self._templated_id = None
+-
+-    @property
+-    def emitter(self):
+-        # type: () -> Any
+-        if hasattr(self.dumper, 'typ'):
+-            return self.dumper.emitter
+-        return self.dumper._emitter
+-
+-    @property
+-    def resolver(self):
+-        # type: () -> Any
+-        if hasattr(self.dumper, 'typ'):
+-            self.dumper.resolver
+-        return self.dumper._resolver
+-
+-    def open(self):
+-        # type: () -> None
+-        if self.closed is None:
+-            self.emitter.emit(StreamStartEvent(encoding=self.use_encoding))
+-            self.closed = False
+-        elif self.closed:
+-            raise SerializerError('serializer is closed')
+-        else:
+-            raise SerializerError('serializer is already opened')
+-
+-    def close(self):
+-        # type: () -> None
+-        if self.closed is None:
+-            raise SerializerError('serializer is not opened')
+-        elif not self.closed:
+-            self.emitter.emit(StreamEndEvent())
+-            self.closed = True
+-
+-    # def __del__(self):
+-    #     self.close()
+-
+-    def serialize(self, node):
+-        # type: (Any) -> None
+-        if dbg(DBG_NODE):
+-            nprint('Serializing nodes')
+-            node.dump()
+-        if self.closed is None:
+-            raise SerializerError('serializer is not opened')
+-        elif self.closed:
+-            raise SerializerError('serializer is closed')
+-        self.emitter.emit(
+-            DocumentStartEvent(
+-                explicit=self.use_explicit_start, version=self.use_version, tags=self.use_tags
+-            )
+-        )
+-        self.anchor_node(node)
+-        self.serialize_node(node, None, None)
+-        self.emitter.emit(DocumentEndEvent(explicit=self.use_explicit_end))
+-        self.serialized_nodes = {}
+-        self.anchors = {}
+-        self.last_anchor_id = 0
+-
+-    def anchor_node(self, node):
+-        # type: (Any) -> None
+-        if node in self.anchors:
+-            if self.anchors[node] is None:
+-                self.anchors[node] = self.generate_anchor(node)
+-        else:
+-            anchor = None
+-            try:
+-                if node.anchor.always_dump:
+-                    anchor = node.anchor.value
+-            except:  # NOQA
+-                pass
+-            self.anchors[node] = anchor
+-            if isinstance(node, SequenceNode):
+-                for item in node.value:
+-                    self.anchor_node(item)
+-            elif isinstance(node, MappingNode):
+-                for key, value in node.value:
+-                    self.anchor_node(key)
+-                    self.anchor_node(value)
+-
+-    def generate_anchor(self, node):
+-        # type: (Any) -> Any
+-        try:
+-            anchor = node.anchor.value
+-        except:  # NOQA
+-            anchor = None
+-        if anchor is None:
+-            self.last_anchor_id += 1
+-            return self.ANCHOR_TEMPLATE % self.last_anchor_id
+-        return anchor
+-
+-    def serialize_node(self, node, parent, index):
+-        # type: (Any, Any, Any) -> None
+-        alias = self.anchors[node]
+-        if node in self.serialized_nodes:
+-            self.emitter.emit(AliasEvent(alias))
+-        else:
+-            self.serialized_nodes[node] = True
+-            self.resolver.descend_resolver(parent, index)
+-            if isinstance(node, ScalarNode):
+-                # here check if the node.tag equals the one that would result from parsing
+-                # if not equal quoting is necessary for strings
+-                detected_tag = self.resolver.resolve(ScalarNode, node.value, (True, False))
+-                default_tag = self.resolver.resolve(ScalarNode, node.value, (False, True))
+-                implicit = (
+-                    (node.tag == detected_tag),
+-                    (node.tag == default_tag),
+-                    node.tag.startswith('tag:yaml.org,2002:'),
+-                )
+-                self.emitter.emit(
+-                    ScalarEvent(
+-                        alias,
+-                        node.tag,
+-                        implicit,
+-                        node.value,
+-                        style=node.style,
+-                        comment=node.comment,
+-                    )
+-                )
+-            elif isinstance(node, SequenceNode):
+-                implicit = node.tag == self.resolver.resolve(SequenceNode, node.value, True)
+-                comment = node.comment
+-                end_comment = None
+-                seq_comment = None
+-                if node.flow_style is True:
+-                    if comment:  # eol comment on flow style sequence
+-                        seq_comment = comment[0]
+-                        # comment[0] = None
+-                if comment and len(comment) > 2:
+-                    end_comment = comment[2]
+-                else:
+-                    end_comment = None
+-                self.emitter.emit(
+-                    SequenceStartEvent(
+-                        alias,
+-                        node.tag,
+-                        implicit,
+-                        flow_style=node.flow_style,
+-                        comment=node.comment,
+-                    )
+-                )
+-                index = 0
+-                for item in node.value:
+-                    self.serialize_node(item, node, index)
+-                    index += 1
+-                self.emitter.emit(SequenceEndEvent(comment=[seq_comment, end_comment]))
+-            elif isinstance(node, MappingNode):
+-                implicit = node.tag == self.resolver.resolve(MappingNode, node.value, True)
+-                comment = node.comment
+-                end_comment = None
+-                map_comment = None
+-                if node.flow_style is True:
+-                    if comment:  # eol comment on flow style sequence
+-                        map_comment = comment[0]
+-                        # comment[0] = None
+-                if comment and len(comment) > 2:
+-                    end_comment = comment[2]
+-                self.emitter.emit(
+-                    MappingStartEvent(
+-                        alias,
+-                        node.tag,
+-                        implicit,
+-                        flow_style=node.flow_style,
+-                        comment=node.comment,
+-                        nr_items=len(node.value),
+-                    )
+-                )
+-                for key, value in node.value:
+-                    self.serialize_node(key, node, None)
+-                    self.serialize_node(value, node, key)
+-                self.emitter.emit(MappingEndEvent(comment=[map_comment, end_comment]))
+-            self.resolver.ascend_resolver()
+-
+-
+-def templated_id(s):
+-    # type: (Text) -> Any
+-    return Serializer.ANCHOR_RE.match(s)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/setup.cfg b/dynaconf/vendor_src/ruamel/yaml/setup.cfg
+deleted file mode 100644
+index 8bfd5a1..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/setup.cfg
++++ /dev/null
+@@ -1,4 +0,0 @@
+-[egg_info]
+-tag_build = 
+-tag_date = 0
+-
+diff --git a/dynaconf/vendor_src/ruamel/yaml/setup.py b/dynaconf/vendor_src/ruamel/yaml/setup.py
+deleted file mode 100644
+index f22dceb..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/setup.py
++++ /dev/null
+@@ -1,962 +0,0 @@
+-# # header
+-# coding: utf-8
+-# dd: 20200125
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-# # __init__.py parser
+-
+-import sys
+-import os
+-import datetime
+-import traceback
+-
+-sys.path = [path for path in sys.path if path not in [os.getcwd(), ""]]
+-import platform  # NOQA
+-from _ast import *  # NOQA
+-from ast import parse  # NOQA
+-
+-from setuptools import setup, Extension, Distribution  # NOQA
+-from setuptools.command import install_lib  # NOQA
+-from setuptools.command.sdist import sdist as _sdist  # NOQA
+-
+-try:
+-    from setuptools.namespaces import Installer as NameSpaceInstaller # NOQA
+-except ImportError:
+-    msg = ('You should use the latest setuptools. The namespaces.py file that this setup.py'
+-           ' uses was added in setuptools 28.7.0 (Oct 2016)')
+-    print(msg)
+-    sys.exit()
+-
+-if __name__ != '__main__':
+-    raise NotImplementedError('should never include setup.py')
+-
+-# # definitions
+-
+-full_package_name = None
+-
+-if sys.version_info < (3,):
+-    string_type = basestring
+-else:
+-    string_type = str
+-
+-
+-if sys.version_info < (3, 4):
+-
+-    class Bytes:
+-        pass
+-
+-    class NameConstant:
+-        pass
+-
+-
+-if sys.version_info >= (3, 8):
+-    from ast import Str, Num, Bytes, NameConstant  # NOQA
+-
+-
+-if sys.version_info < (3,):
+-    open_kw = dict()
+-else:
+-    open_kw = dict(encoding='utf-8')
+-
+-
+-if sys.version_info < (2, 7) or platform.python_implementation() == 'Jython':
+-
+-    class Set:
+-        pass
+-
+-
+-if os.environ.get('DVDEBUG', "") == "":
+-
+-    def debug(*args, **kw):
+-        pass
+-
+-
+-else:
+-
+-    def debug(*args, **kw):
+-        with open(os.environ['DVDEBUG'], 'a') as fp:
+-            kw1 = kw.copy()
+-            kw1['file'] = fp
+-            print('{:%Y-%d-%mT%H:%M:%S}'.format(datetime.datetime.now()), file=fp, end=' ')
+-            print(*args, **kw1)
+-
+-
+-def literal_eval(node_or_string):
+-    """
+-    Safely evaluate an expression node or a string containing a Python
+-    expression.  The string or node provided may only consist of the following
+-    Python literal structures: strings, bytes, numbers, tuples, lists, dicts,
+-    sets, booleans, and None.
+-
+-    Even when passing in Unicode, the resulting Str types parsed are 'str' in Python 2.
+-    I don't now how to set 'unicode_literals' on parse -> Str is explicitly converted.
+-    """
+-    _safe_names = {'None': None, 'True': True, 'False': False}
+-    if isinstance(node_or_string, string_type):
+-        node_or_string = parse(node_or_string, mode='eval')
+-    if isinstance(node_or_string, Expression):
+-        node_or_string = node_or_string.body
+-    else:
+-        raise TypeError('only string or AST nodes supported')
+-
+-    def _convert(node):
+-        if isinstance(node, Str):
+-            if sys.version_info < (3,) and not isinstance(node.s, unicode):
+-                return node.s.decode('utf-8')
+-            return node.s
+-        elif isinstance(node, Bytes):
+-            return node.s
+-        elif isinstance(node, Num):
+-            return node.n
+-        elif isinstance(node, Tuple):
+-            return tuple(map(_convert, node.elts))
+-        elif isinstance(node, List):
+-            return list(map(_convert, node.elts))
+-        elif isinstance(node, Set):
+-            return set(map(_convert, node.elts))
+-        elif isinstance(node, Dict):
+-            return dict((_convert(k), _convert(v)) for k, v in zip(node.keys, node.values))
+-        elif isinstance(node, NameConstant):
+-            return node.value
+-        elif sys.version_info < (3, 4) and isinstance(node, Name):
+-            if node.id in _safe_names:
+-                return _safe_names[node.id]
+-        elif (
+-            isinstance(node, UnaryOp)
+-            and isinstance(node.op, (UAdd, USub))
+-            and isinstance(node.operand, (Num, UnaryOp, BinOp))
+-        ):  # NOQA
+-            operand = _convert(node.operand)
+-            if isinstance(node.op, UAdd):
+-                return +operand
+-            else:
+-                return -operand
+-        elif (
+-            isinstance(node, BinOp)
+-            and isinstance(node.op, (Add, Sub))
+-            and isinstance(node.right, (Num, UnaryOp, BinOp))
+-            and isinstance(node.left, (Num, UnaryOp, BinOp))
+-        ):  # NOQA
+-            left = _convert(node.left)
+-            right = _convert(node.right)
+-            if isinstance(node.op, Add):
+-                return left + right
+-            else:
+-                return left - right
+-        elif isinstance(node, Call):
+-            func_id = getattr(node.func, 'id', None)
+-            if func_id == 'dict':
+-                return dict((k.arg, _convert(k.value)) for k in node.keywords)
+-            elif func_id == 'set':
+-                return set(_convert(node.args[0]))
+-            elif func_id == 'date':
+-                return datetime.date(*[_convert(k) for k in node.args])
+-            elif func_id == 'datetime':
+-                return datetime.datetime(*[_convert(k) for k in node.args])
+-        err = SyntaxError('malformed node or string: ' + repr(node))
+-        err.filename = '<string>'
+-        err.lineno = node.lineno
+-        err.offset = node.col_offset
+-        err.text = repr(node)
+-        err.node = node
+-        raise err
+-
+-    return _convert(node_or_string)
+-
+-
+-# parses python ( "= dict( )" ) or ( "= {" )
+-def _package_data(fn):
+-    data = {}
+-    with open(fn, **open_kw) as fp:
+-        parsing = False
+-        lines = []
+-        for line in fp.readlines():
+-            if sys.version_info < (3,):
+-                line = line.decode('utf-8')
+-            if line.startswith('_package_data'):
+-                if 'dict(' in line:
+-                    parsing = 'python'
+-                    lines.append('dict(\n')
+-                elif line.endswith('= {\n'):
+-                    parsing = 'python'
+-                    lines.append('{\n')
+-                else:
+-                    raise NotImplementedError
+-                continue
+-            if not parsing:
+-                continue
+-            if parsing == 'python':
+-                if line.startswith(')') or line.startswith('}'):
+-                    lines.append(line)
+-                    try:
+-                        data = literal_eval("".join(lines))
+-                    except SyntaxError as e:
+-                        context = 2
+-                        from_line = e.lineno - (context + 1)
+-                        to_line = e.lineno + (context - 1)
+-                        w = len(str(to_line))
+-                        for index, line in enumerate(lines):
+-                            if from_line <= index <= to_line:
+-                                print(
+-                                    '{0:{1}}: {2}'.format(index, w, line).encode('utf-8'),
+-                                    end="",
+-                                )
+-                                if index == e.lineno - 1:
+-                                    print(
+-                                        '{0:{1}}  {2}^--- {3}'.format(
+-                                            ' ', w, ' ' * e.offset, e.node
+-                                        )
+-                                    )
+-                        raise
+-                    break
+-                lines.append(line)
+-            else:
+-                raise NotImplementedError
+-    return data
+-
+-
+-# make sure you can run "python ../some/dir/setup.py install"
+-pkg_data = _package_data(__file__.replace('setup.py', '__init__.py'))
+-
+-exclude_files = ['setup.py']
+-
+-
+-# # helper
+-def _check_convert_version(tup):
+-    """Create a PEP 386 pseudo-format conformant string from tuple tup."""
+-    ret_val = str(tup[0])  # first is always digit
+-    next_sep = '.'  # separator for next extension, can be "" or "."
+-    nr_digits = 0  # nr of adjacent digits in rest, to verify
+-    post_dev = False  # are we processig post/dev
+-    for x in tup[1:]:
+-        if isinstance(x, int):
+-            nr_digits += 1
+-            if nr_digits > 2:
+-                raise ValueError('too many consecutive digits after ' + ret_val)
+-            ret_val += next_sep + str(x)
+-            next_sep = '.'
+-            continue
+-        first_letter = x[0].lower()
+-        next_sep = ""
+-        if first_letter in 'abcr':
+-            if post_dev:
+-                raise ValueError('release level specified after ' 'post/dev: ' + x)
+-            nr_digits = 0
+-            ret_val += 'rc' if first_letter == 'r' else first_letter
+-        elif first_letter in 'pd':
+-            nr_digits = 1  # only one can follow
+-            post_dev = True
+-            ret_val += '.post' if first_letter == 'p' else '.dev'
+-        else:
+-            raise ValueError('First letter of "' + x + '" not recognised')
+-    # .dev and .post need a number otherwise setuptools normalizes and complains
+-    if nr_digits == 1 and post_dev:
+-        ret_val += '0'
+-    return ret_val
+-
+-
+-version_info = pkg_data['version_info']
+-version_str = _check_convert_version(version_info)
+-
+-
+-class MyInstallLib(install_lib.install_lib):
+-    def install(self):
+-        fpp = pkg_data['full_package_name'].split('.')  # full package path
+-        full_exclude_files = [os.path.join(*(fpp + [x])) for x in exclude_files]
+-        alt_files = []
+-        outfiles = install_lib.install_lib.install(self)
+-        for x in outfiles:
+-            for full_exclude_file in full_exclude_files:
+-                if full_exclude_file in x:
+-                    os.remove(x)
+-                    break
+-            else:
+-                alt_files.append(x)
+-        return alt_files
+-
+-
+-class MySdist(_sdist):
+-    def initialize_options(self):
+-        _sdist.initialize_options(self)
+-        # see pep 527, new uploads should be tar.gz or .zip
+-        # fmt = getattr(self, 'tarfmt',  None)
+-        # because of unicode_literals
+-        # self.formats = fmt if fmt else [b'bztar'] if sys.version_info < (3, ) else ['bztar']
+-        dist_base = os.environ.get('PYDISTBASE')
+-        fpn = getattr(getattr(self, 'nsp', self), 'full_package_name', None)
+-        if fpn and dist_base:
+-            print('setting  distdir {}/{}'.format(dist_base, fpn))
+-            self.dist_dir = os.path.join(dist_base, fpn)
+-
+-
+-# try except so this doesn't bomb when you don't have wheel installed, implies
+-# generation of wheels in ./dist
+-try:
+-    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel  # NOQA
+-
+-    class MyBdistWheel(_bdist_wheel):
+-        def initialize_options(self):
+-            _bdist_wheel.initialize_options(self)
+-            dist_base = os.environ.get('PYDISTBASE')
+-            fpn = getattr(getattr(self, 'nsp', self), 'full_package_name', None)
+-            if fpn and dist_base:
+-                print('setting  distdir {}/{}'.format(dist_base, fpn))
+-                self.dist_dir = os.path.join(dist_base, fpn)
+-
+-    _bdist_wheel_available = True
+-
+-except ImportError:
+-    _bdist_wheel_available = False
+-
+-
+-class NameSpacePackager(object):
+-    def __init__(self, pkg_data):
+-        assert isinstance(pkg_data, dict)
+-        self._pkg_data = pkg_data
+-        self.full_package_name = self.pn(self._pkg_data['full_package_name'])
+-        self._split = None
+-        self.depth = self.full_package_name.count('.')
+-        self.nested = self._pkg_data.get('nested', False)
+-        if self.nested:
+-            NameSpaceInstaller.install_namespaces = lambda x: None
+-        self.command = None
+-        self.python_version()
+-        self._pkg = [None, None]  # required and pre-installable packages
+-        if (
+-            sys.argv[0] == 'setup.py'
+-            and sys.argv[1] == 'install'
+-            and '--single-version-externally-managed' not in sys.argv
+-        ):
+-            if os.environ.get('READTHEDOCS', None) == 'True':
+-                os.system('pip install .')
+-                sys.exit(0)
+-            if not os.environ.get('RUAMEL_NO_PIP_INSTALL_CHECK', False):
+-                print('error: you have to install with "pip install ."')
+-                sys.exit(1)
+-        # If you only support an extension module on Linux, Windows thinks it
+-        # is pure. That way you would get pure python .whl files that take
+-        # precedence for downloading on Linux over source with compilable C code
+-        if self._pkg_data.get('universal'):
+-            Distribution.is_pure = lambda *args: True
+-        else:
+-            Distribution.is_pure = lambda *args: False
+-        for x in sys.argv:
+-            if x[0] == '-' or x == 'setup.py':
+-                continue
+-            self.command = x
+-            break
+-
+-    def pn(self, s):
+-        if sys.version_info < (3,) and isinstance(s, unicode):
+-            return s.encode('utf-8')
+-        return s
+-
+-    @property
+-    def split(self):
+-        """split the full package name in list of compontents traditionally
+-        done by setuptools.find_packages. This routine skips any directories
+-        with __init__.py, for which the name starts with "_" or ".", or contain a
+-        setup.py/tox.ini (indicating a subpackage)
+-        """
+-        skip = []
+-        if self._split is None:
+-            fpn = self.full_package_name.split('.')
+-            self._split = []
+-            while fpn:
+-                self._split.insert(0, '.'.join(fpn))
+-                fpn = fpn[:-1]
+-            for d in sorted(os.listdir('.')):
+-                if not os.path.isdir(d) or d == self._split[0] or d[0] in '._':
+-                    continue
+-                # prevent sub-packages in namespace from being included
+-                x = os.path.join(d, '__init__.py')
+-                if os.path.exists(x):
+-                    pd = _package_data(x)
+-                    if pd.get('nested', False):
+-                        skip.append(d)
+-                        continue
+-                    self._split.append(self.full_package_name + '.' + d)
+-            if sys.version_info < (3,):
+-                self._split = [
+-                    (y.encode('utf-8') if isinstance(y, unicode) else y) for y in self._split
+-                ]
+-        if skip:
+-            # this interferes with output checking
+-            # print('skipping sub-packages:', ', '.join(skip))
+-            pass
+-        return self._split
+-
+-    @property
+-    def namespace_packages(self):
+-        return self.split[: self.depth]
+-
+-    def namespace_directories(self, depth=None):
+-        """return list of directories where the namespace should be created /
+-        can be found
+-        """
+-        res = []
+-        for index, d in enumerate(self.split[:depth]):
+-            # toplevel gets a dot
+-            if index > 0:
+-                d = os.path.join(*d.split('.'))
+-            res.append('.' + d)
+-        return res
+-
+-    @property
+-    def package_dir(self):
+-        d = {
+-            # don't specify empty dir, clashes with package_data spec
+-            self.full_package_name: '.'
+-        }
+-        if 'extra_packages' in self._pkg_data:
+-            return d
+-        if len(self.split) > 1:  # only if package namespace
+-            d[self.split[0]] = self.namespace_directories(1)[0]
+-        return d
+-
+-    def create_dirs(self):
+-        """create the directories necessary for namespace packaging"""
+-        directories = self.namespace_directories(self.depth)
+-        if not directories:
+-            return
+-        if not os.path.exists(directories[0]):
+-            for d in directories:
+-                os.mkdir(d)
+-                with open(os.path.join(d, '__init__.py'), 'w') as fp:
+-                    fp.write(
+-                        'import pkg_resources\n' 'pkg_resources.declare_namespace(__name__)\n'
+-                    )
+-
+-    def python_version(self):
+-        supported = self._pkg_data.get('supported')
+-        if supported is None:
+-            return
+-        if len(supported) == 1:
+-            minimum = supported[0]
+-        else:
+-            for x in supported:
+-                if x[0] == sys.version_info[0]:
+-                    minimum = x
+-                    break
+-            else:
+-                return
+-        if sys.version_info < minimum:
+-            print('minimum python version(s): ' + str(supported))
+-            sys.exit(1)
+-
+-    def check(self):
+-        try:
+-            from pip.exceptions import InstallationError
+-        except ImportError:
+-            return
+-        # arg is either develop (pip install -e) or install
+-        if self.command not in ['install', 'develop']:
+-            return
+-
+-        # if hgi and hgi.base are both in namespace_packages matching
+-        # against the top (hgi.) it suffices to find minus-e and non-minus-e
+-        # installed packages. As we don't know the order in namespace_packages
+-        # do some magic
+-        prefix = self.split[0]
+-        prefixes = set([prefix, prefix.replace('_', '-')])
+-        for p in sys.path:
+-            if not p:
+-                continue  # directory with setup.py
+-            if os.path.exists(os.path.join(p, 'setup.py')):
+-                continue  # some linked in stuff might not be hgi based
+-            if not os.path.isdir(p):
+-                continue
+-            if p.startswith('/tmp/'):
+-                continue
+-            for fn in os.listdir(p):
+-                for pre in prefixes:
+-                    if fn.startswith(pre):
+-                        break
+-                else:
+-                    continue
+-                full_name = os.path.join(p, fn)
+-                # not in prefixes the toplevel is never changed from _ to -
+-                if fn == prefix and os.path.isdir(full_name):
+-                    # directory -> other, non-minus-e, install
+-                    if self.command == 'develop':
+-                        raise InstallationError(
+-                            'Cannot mix develop (pip install -e),\nwith '
+-                            'non-develop installs for package name {0}'.format(fn)
+-                        )
+-                elif fn == prefix:
+-                    raise InstallationError('non directory package {0} in {1}'.format(fn, p))
+-                for pre in [x + '.' for x in prefixes]:
+-                    if fn.startswith(pre):
+-                        break
+-                else:
+-                    continue  # hgiabc instead of hgi.
+-                if fn.endswith('-link') and self.command == 'install':
+-                    raise InstallationError(
+-                        'Cannot mix non-develop with develop\n(pip install -e)'
+-                        ' installs for package name {0}'.format(fn)
+-                    )
+-
+-    def entry_points(self, script_name=None, package_name=None):
+-        """normally called without explicit script_name and package name
+-        the default console_scripts entry depends on the existence of __main__.py:
+-        if that file exists then the function main() in there is used, otherwise
+-        the in __init__.py.
+-
+-        the _package_data entry_points key/value pair can be explicitly specified
+-        including a "=" character. If the entry is True or 1 the
+-        scriptname is the last part of the full package path (split on '.')
+-        if the ep entry is a simple string without "=", that is assumed to be
+-        the name of the script.
+-        """
+-
+-        def pckg_entry_point(name):
+-            return '{0}{1}:main'.format(
+-                name, '.__main__' if os.path.exists('__main__.py') else ""
+-            )
+-
+-        ep = self._pkg_data.get('entry_points', True)
+-        if isinstance(ep, dict):
+-            return ep
+-        if ep is None:
+-            return None
+-        if ep not in [True, 1]:
+-            if '=' in ep:
+-                # full specification of the entry point like
+-                # entry_points=['yaml = ruamel.yaml.cmd:main'],
+-                return {'console_scripts': [ep]}
+-            # assume that it is just the script name
+-            script_name = ep
+-        if package_name is None:
+-            package_name = self.full_package_name
+-        if not script_name:
+-            script_name = package_name.split('.')[-1]
+-        return {
+-            'console_scripts': [
+-                '{0} = {1}'.format(script_name, pckg_entry_point(package_name))
+-            ]
+-        }
+-
+-    @property
+-    def url(self):
+-        url = self._pkg_data.get('url')
+-        if url:
+-            return url
+-        sp = self.full_package_name
+-        for ch in '_.':
+-            sp = sp.replace(ch, '-')
+-        return 'https://sourceforge.net/p/{0}/code/ci/default/tree'.format(sp)
+-
+-    @property
+-    def author(self):
+-        return self._pkg_data['author']  # no get needs to be there
+-
+-    @property
+-    def author_email(self):
+-        return self._pkg_data['author_email']  # no get needs to be there
+-
+-    @property
+-    def license(self):
+-        """return the license field from _package_data, None means MIT"""
+-        lic = self._pkg_data.get('license')
+-        if lic is None:
+-            # lic_fn = os.path.join(os.path.dirname(__file__), 'LICENSE')
+-            # assert os.path.exists(lic_fn)
+-            return 'MIT license'
+-        return lic
+-
+-    def has_mit_lic(self):
+-        return 'MIT' in self.license
+-
+-    @property
+-    def description(self):
+-        return self._pkg_data['description']  # no get needs to be there
+-
+-    @property
+-    def status(self):
+-        # αβ
+-        status = self._pkg_data.get('status', 'β').lower()
+-        if status in ['α', 'alpha']:
+-            return (3, 'Alpha')
+-        elif status in ['β', 'beta']:
+-            return (4, 'Beta')
+-        elif 'stable' in status.lower():
+-            return (5, 'Production/Stable')
+-        raise NotImplementedError
+-
+-    @property
+-    def classifiers(self):
+-        """this needs more intelligence, probably splitting the classifiers from _pkg_data
+-        and only adding defaults when no explicit entries were provided.
+-        Add explicit Python versions in sync with tox.env generation based on python_requires?
+-        """
+-        attr = '_' + sys._getframe().f_code.co_name
+-        if not hasattr(self, attr):
+-            setattr(self, attr, self._setup_classifiers())
+-        return getattr(self, attr)
+-
+-    def _setup_classifiers(self):
+-        return sorted(
+-            set(
+-                [
+-                    'Development Status :: {0} - {1}'.format(*self.status),
+-                    'Intended Audience :: Developers',
+-                    'License :: '
+-                    + ('OSI Approved :: MIT' if self.has_mit_lic() else 'Other/Proprietary')
+-                    + ' License',
+-                    'Operating System :: OS Independent',
+-                    'Programming Language :: Python',
+-                ]
+-                + [self.pn(x) for x in self._pkg_data.get('classifiers', [])]
+-            )
+-        )
+-
+-    @property
+-    def keywords(self):
+-        return self.pn(self._pkg_data.get('keywords', []))
+-
+-    @property
+-    def install_requires(self):
+-        """list of packages required for installation"""
+-        return self._analyse_packages[0]
+-
+-    @property
+-    def install_pre(self):
+-        """list of packages required for installation"""
+-        return self._analyse_packages[1]
+-
+-    @property
+-    def _analyse_packages(self):
+-        """gather from configuration, names starting with * need
+-        to be installed explicitly as they are not on PyPI
+-        install_requires should be  dict, with keys 'any', 'py27' etc
+-        or a list (which is as if only 'any' was defined
+-
+-        ToDo: update with: pep508 conditional dependencies
+-        """
+-        if self._pkg[0] is None:
+-            self._pkg[0] = []
+-            self._pkg[1] = []
+-
+-        ir = self._pkg_data.get('install_requires')
+-        if ir is None:
+-            return self._pkg  # these will be both empty at this point
+-        if isinstance(ir, list):
+-            self._pkg[0] = ir
+-            return self._pkg
+-        # 'any' for all builds, 'py27' etc for specifics versions
+-        packages = ir.get('any', [])
+-        if isinstance(packages, string_type):
+-            packages = packages.split()  # assume white space separated string
+-        if self.nested:
+-            # parent dir is also a package, make sure it is installed (need its .pth file)
+-            parent_pkg = self.full_package_name.rsplit('.', 1)[0]
+-            if parent_pkg not in packages:
+-                packages.append(parent_pkg)
+-        implementation = platform.python_implementation()
+-        if implementation == 'CPython':
+-            pyver = 'py{0}{1}'.format(*sys.version_info)
+-        elif implementation == 'PyPy':
+-            pyver = 'pypy' if sys.version_info < (3,) else 'pypy3'
+-        elif implementation == 'Jython':
+-            pyver = 'jython'
+-        packages.extend(ir.get(pyver, []))
+-        for p in packages:
+-            # package name starting with * means use local source tree,  non-published
+-            # to PyPi or maybe not latest version on PyPI -> pre-install
+-            if p[0] == '*':
+-                p = p[1:]
+-                self._pkg[1].append(p)
+-            self._pkg[0].append(p)
+-        return self._pkg
+-
+-    @property
+-    def extras_require(self):
+-        """dict of conditions -> extra packages informaton required for installation
+-        as of setuptools 33 doing `package ; python_version<=2.7' in install_requires
+-        still doesn't work
+-
+-        https://www.python.org/dev/peps/pep-0508/
+-        https://wheel.readthedocs.io/en/latest/index.html#defining-conditional-dependencies
+-        https://hynek.me/articles/conditional-python-dependencies/
+-        """
+-        ep = self._pkg_data.get('extras_require')
+-        return ep
+-
+-    # @property
+-    # def data_files(self):
+-    #     df = self._pkg_data.get('data_files', [])
+-    #     if self.has_mit_lic():
+-    #         df.append('LICENSE')
+-    #     if not df:
+-    #         return None
+-    #     return [('.', df)]
+-
+-    @property
+-    def package_data(self):
+-        df = self._pkg_data.get('data_files', [])
+-        if self.has_mit_lic():
+-            # include the file
+-            df.append('LICENSE')
+-            # but don't install it
+-            exclude_files.append('LICENSE')
+-        if self._pkg_data.get('binary_only', False):
+-            exclude_files.append('__init__.py')
+-        debug('testing<<<<<')
+-        if 'Typing :: Typed' in self.classifiers:
+-            debug('appending')
+-            df.append('py.typed')
+-        pd = self._pkg_data.get('package_data', {})
+-        if df:
+-            pd[self.full_package_name] = df
+-        if sys.version_info < (3,):
+-            # python2 doesn't seem to like unicode package names as keys
+-            # maybe only when the packages themselves are non-unicode
+-            for k in pd:
+-                if isinstance(k, unicode):
+-                    pd[str(k)] = pd.pop(k)
+-            # for k in pd:
+-            #     pd[k] = [e.encode('utf-8') for e in pd[k]]  # de-unicode
+-        return pd
+-
+-    @property
+-    def packages(self):
+-        s = self.split
+-        # fixed this in package_data, the keys there must be non-unicode for py27
+-        # if sys.version_info < (3, 0):
+-        #     s = [x.encode('utf-8') for x in self.split]
+-        return s + self._pkg_data.get('extra_packages', [])
+-
+-    @property
+-    def python_requires(self):
+-        return self._pkg_data.get('python_requires', None)
+-
+-    @property
+-    def ext_modules(self):
+-        """
+-        Check if all modules specified in the value for 'ext_modules' can be build.
+-        That value (if not None) is a list of dicts with 'name', 'src', 'lib'
+-        Optional 'test' can be used to make sure trying to compile will work on the host
+-
+-        creates and return the external modules as Extensions, unless that
+-        is not necessary at all for the action (like --version)
+-
+-        test existence of compiler by using export CC=nonexistent; export CXX=nonexistent
+-        """
+-
+-        if hasattr(self, '_ext_modules'):
+-            return self._ext_modules
+-        if '--version' in sys.argv:
+-            return None
+-        if platform.python_implementation() == 'Jython':
+-            return None
+-        try:
+-            plat = sys.argv.index('--plat-name')
+-            if 'win' in sys.argv[plat + 1]:
+-                return None
+-        except ValueError:
+-            pass
+-        self._ext_modules = []
+-        no_test_compile = False
+-        if '--restructuredtext' in sys.argv:
+-            no_test_compile = True
+-        elif 'sdist' in sys.argv:
+-            no_test_compile = True
+-        if no_test_compile:
+-            for target in self._pkg_data.get('ext_modules', []):
+-                ext = Extension(
+-                    self.pn(target['name']),
+-                    sources=[self.pn(x) for x in target['src']],
+-                    libraries=[self.pn(x) for x in target.get('lib')],
+-                )
+-                self._ext_modules.append(ext)
+-            return self._ext_modules
+-
+-        print('sys.argv', sys.argv)
+-        import tempfile
+-        import shutil
+-        from textwrap import dedent
+-
+-        import distutils.sysconfig
+-        import distutils.ccompiler
+-        from distutils.errors import CompileError, LinkError
+-
+-        for target in self._pkg_data.get('ext_modules', []):  # list of dicts
+-            ext = Extension(
+-                self.pn(target['name']),
+-                sources=[self.pn(x) for x in target['src']],
+-                libraries=[self.pn(x) for x in target.get('lib')],
+-            )
+-            # debug('test1 in target', 'test' in target, target)
+-            if 'test' not in target:  # no test, just hope it works
+-                self._ext_modules.append(ext)
+-                continue
+-            if sys.version_info[:2] == (3, 4) and platform.system() == 'Windows':
+-                # this is giving problems on appveyor, so skip
+-                if 'FORCE_C_BUILD_TEST' not in os.environ:
+-                    self._ext_modules.append(ext)
+-                    continue
+-            # write a temporary .c file to compile
+-            c_code = dedent(target['test'])
+-            try:
+-                tmp_dir = tempfile.mkdtemp(prefix='tmp_ruamel_')
+-                bin_file_name = 'test' + self.pn(target['name'])
+-                print('test compiling', bin_file_name)
+-                file_name = os.path.join(tmp_dir, bin_file_name + '.c')
+-                with open(file_name, 'w') as fp:  # write source
+-                    fp.write(c_code)
+-                # and try to compile it
+-                compiler = distutils.ccompiler.new_compiler()
+-                assert isinstance(compiler, distutils.ccompiler.CCompiler)
+-                # do any platform specific initialisations
+-                distutils.sysconfig.customize_compiler(compiler)
+-                # make sure you can reach header files because compile does change dir
+-                compiler.add_include_dir(os.getcwd())
+-                if sys.version_info < (3,):
+-                    tmp_dir = tmp_dir.encode('utf-8')
+-                # used to be a different directory, not necessary
+-                compile_out_dir = tmp_dir
+-                try:
+-                    compiler.link_executable(
+-                        compiler.compile([file_name], output_dir=compile_out_dir),
+-                        bin_file_name,
+-                        output_dir=tmp_dir,
+-                        libraries=ext.libraries,
+-                    )
+-                except CompileError:
+-                    debug('compile error:', file_name)
+-                    print('compile error:', file_name)
+-                    continue
+-                except LinkError:
+-                    debug('link error', file_name)
+-                    print('link error', file_name)
+-                    continue
+-                self._ext_modules.append(ext)
+-            except Exception as e:  # NOQA
+-                debug('Exception:', e)
+-                print('Exception:', e)
+-                if sys.version_info[:2] == (3, 4) and platform.system() == 'Windows':
+-                    traceback.print_exc()
+-            finally:
+-                shutil.rmtree(tmp_dir)
+-        return self._ext_modules
+-
+-    @property
+-    def test_suite(self):
+-        return self._pkg_data.get('test_suite')
+-
+-    def wheel(self, kw, setup):
+-        """temporary add setup.cfg if creating a wheel to include LICENSE file
+-        https://bitbucket.org/pypa/wheel/issues/47
+-        """
+-        if 'bdist_wheel' not in sys.argv:
+-            return False
+-        file_name = 'setup.cfg'
+-        if os.path.exists(file_name):  # add it if not in there?
+-            return False
+-        with open(file_name, 'w') as fp:
+-            if os.path.exists('LICENSE'):
+-                fp.write('[metadata]\nlicense-file = LICENSE\n')
+-            else:
+-                print('\n\n>>>>>> LICENSE file not found <<<<<\n\n')
+-            if self._pkg_data.get('universal'):
+-                fp.write('[bdist_wheel]\nuniversal = 1\n')
+-        try:
+-            setup(**kw)
+-        except Exception:
+-            raise
+-        finally:
+-            os.remove(file_name)
+-        return True
+-
+-
+-# # call setup
+-def main():
+-    dump_kw = '--dump-kw'
+-    if dump_kw in sys.argv:
+-        import wheel
+-        import distutils
+-        import setuptools
+-
+-        print('python:    ', sys.version)
+-        print('setuptools:', setuptools.__version__)
+-        print('distutils: ', distutils.__version__)
+-        print('wheel:     ', wheel.__version__)
+-    nsp = NameSpacePackager(pkg_data)
+-    nsp.check()
+-    nsp.create_dirs()
+-    MySdist.nsp = nsp
+-    if pkg_data.get('tarfmt'):
+-        MySdist.tarfmt = pkg_data.get('tarfmt')
+-
+-    cmdclass = dict(install_lib=MyInstallLib, sdist=MySdist)
+-    if _bdist_wheel_available:
+-        MyBdistWheel.nsp = nsp
+-        cmdclass['bdist_wheel'] = MyBdistWheel
+-
+-    kw = dict(
+-        name=nsp.full_package_name,
+-        namespace_packages=nsp.namespace_packages,
+-        version=version_str,
+-        packages=nsp.packages,
+-        python_requires=nsp.python_requires,
+-        url=nsp.url,
+-        author=nsp.author,
+-        author_email=nsp.author_email,
+-        cmdclass=cmdclass,
+-        package_dir=nsp.package_dir,
+-        entry_points=nsp.entry_points(),
+-        description=nsp.description,
+-        install_requires=nsp.install_requires,
+-        extras_require=nsp.extras_require,  # available since setuptools 18.0 / 2015-06
+-        license=nsp.license,
+-        classifiers=nsp.classifiers,
+-        keywords=nsp.keywords,
+-        package_data=nsp.package_data,
+-        ext_modules=nsp.ext_modules,
+-        test_suite=nsp.test_suite,
+-    )
+-
+-    if '--version' not in sys.argv and ('--verbose' in sys.argv or dump_kw in sys.argv):
+-        for k in sorted(kw):
+-            v = kw[k]
+-            print('  "{0}": "{1}",'.format(k, v))
+-    # if '--record' in sys.argv:
+-    #     return
+-    if dump_kw in sys.argv:
+-        sys.argv.remove(dump_kw)
+-    try:
+-        with open('README.rst') as fp:
+-            kw['long_description'] = fp.read()
+-            kw['long_description_content_type'] = 'text/x-rst'
+-    except Exception:
+-        pass
+-
+-    if nsp.wheel(kw, setup):
+-        return
+-    for x in ['-c', 'egg_info', '--egg-base', 'pip-egg-info']:
+-        if x not in sys.argv:
+-            break
+-    else:
+-        # we're doing a tox setup install any starred package by searching up the source tree
+-        # until you match your/package/name for your.package.name
+-        for p in nsp.install_pre:
+-            import subprocess
+-
+-            # search other source
+-            setup_path = os.path.join(*p.split('.') + ['setup.py'])
+-            try_dir = os.path.dirname(sys.executable)
+-            while len(try_dir) > 1:
+-                full_path_setup_py = os.path.join(try_dir, setup_path)
+-                if os.path.exists(full_path_setup_py):
+-                    pip = sys.executable.replace('python', 'pip')
+-                    cmd = [pip, 'install', os.path.dirname(full_path_setup_py)]
+-                    # with open('/var/tmp/notice', 'a') as fp:
+-                    #     print('installing', cmd, file=fp)
+-                    subprocess.check_output(cmd)
+-                    break
+-                try_dir = os.path.dirname(try_dir)
+-    setup(**kw)
+-
+-
+-main()
+diff --git a/dynaconf/vendor_src/ruamel/yaml/timestamp.py b/dynaconf/vendor_src/ruamel/yaml/timestamp.py
+deleted file mode 100644
+index 374e4c0..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/timestamp.py
++++ /dev/null
+@@ -1,28 +0,0 @@
+-# coding: utf-8
+-
+-from __future__ import print_function, absolute_import, division, unicode_literals
+-
+-import datetime
+-import copy
+-
+-# ToDo: at least on PY3 you could probably attach the tzinfo correctly to the object
+-#       a more complete datetime might be used by safe loading as well
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List  # NOQA
+-
+-
+-class TimeStamp(datetime.datetime):
+-    def __init__(self, *args, **kw):
+-        # type: (Any, Any) -> None
+-        self._yaml = dict(t=False, tz=None, delta=0)  # type: Dict[Any, Any]
+-
+-    def __new__(cls, *args, **kw):  # datetime is immutable
+-        # type: (Any, Any) -> Any
+-        return datetime.datetime.__new__(cls, *args, **kw)  # type: ignore
+-
+-    def __deepcopy__(self, memo):
+-        # type: (Any) -> Any
+-        ts = TimeStamp(self.year, self.month, self.day, self.hour, self.minute, self.second)
+-        ts._yaml = copy.deepcopy(self._yaml)
+-        return ts
+diff --git a/dynaconf/vendor_src/ruamel/yaml/tokens.py b/dynaconf/vendor_src/ruamel/yaml/tokens.py
+deleted file mode 100644
+index 5f5a663..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/tokens.py
++++ /dev/null
+@@ -1,286 +0,0 @@
+-# # header
+-# coding: utf-8
+-
+-from __future__ import unicode_literals
+-
+-if False:  # MYPY
+-    from typing import Text, Any, Dict, Optional, List  # NOQA
+-    from .error import StreamMark  # NOQA
+-
+-SHOWLINES = True
+-
+-
+-class Token(object):
+-    __slots__ = 'start_mark', 'end_mark', '_comment'
+-
+-    def __init__(self, start_mark, end_mark):
+-        # type: (StreamMark, StreamMark) -> None
+-        self.start_mark = start_mark
+-        self.end_mark = end_mark
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        # attributes = [key for key in self.__slots__ if not key.endswith('_mark') and
+-        #               hasattr('self', key)]
+-        attributes = [key for key in self.__slots__ if not key.endswith('_mark')]
+-        attributes.sort()
+-        arguments = ', '.join(['%s=%r' % (key, getattr(self, key)) for key in attributes])
+-        if SHOWLINES:
+-            try:
+-                arguments += ', line: ' + str(self.start_mark.line)
+-            except:  # NOQA
+-                pass
+-        try:
+-            arguments += ', comment: ' + str(self._comment)
+-        except:  # NOQA
+-            pass
+-        return '{}({})'.format(self.__class__.__name__, arguments)
+-
+-    def add_post_comment(self, comment):
+-        # type: (Any) -> None
+-        if not hasattr(self, '_comment'):
+-            self._comment = [None, None]
+-        self._comment[0] = comment
+-
+-    def add_pre_comments(self, comments):
+-        # type: (Any) -> None
+-        if not hasattr(self, '_comment'):
+-            self._comment = [None, None]
+-        assert self._comment[1] is None
+-        self._comment[1] = comments
+-
+-    def get_comment(self):
+-        # type: () -> Any
+-        return getattr(self, '_comment', None)
+-
+-    @property
+-    def comment(self):
+-        # type: () -> Any
+-        return getattr(self, '_comment', None)
+-
+-    def move_comment(self, target, empty=False):
+-        # type: (Any, bool) -> Any
+-        """move a comment from this token to target (normally next token)
+-        used to combine e.g. comments before a BlockEntryToken to the
+-        ScalarToken that follows it
+-        empty is a special for empty values -> comment after key
+-        """
+-        c = self.comment
+-        if c is None:
+-            return
+-        # don't push beyond last element
+-        if isinstance(target, (StreamEndToken, DocumentStartToken)):
+-            return
+-        delattr(self, '_comment')
+-        tc = target.comment
+-        if not tc:  # target comment, just insert
+-            # special for empty value in key: value issue 25
+-            if empty:
+-                c = [c[0], c[1], None, None, c[0]]
+-            target._comment = c
+-            # nprint('mco2:', self, target, target.comment, empty)
+-            return self
+-        if c[0] and tc[0] or c[1] and tc[1]:
+-            raise NotImplementedError('overlap in comment %r %r' % (c, tc))
+-        if c[0]:
+-            tc[0] = c[0]
+-        if c[1]:
+-            tc[1] = c[1]
+-        return self
+-
+-    def split_comment(self):
+-        # type: () -> Any
+-        """ split the post part of a comment, and return it
+-        as comment to be added. Delete second part if [None, None]
+-         abc:  # this goes to sequence
+-           # this goes to first element
+-           - first element
+-        """
+-        comment = self.comment
+-        if comment is None or comment[0] is None:
+-            return None  # nothing to do
+-        ret_val = [comment[0], None]
+-        if comment[1] is None:
+-            delattr(self, '_comment')
+-        return ret_val
+-
+-
+-# class BOMToken(Token):
+-#     id = '<byte order mark>'
+-
+-
+-class DirectiveToken(Token):
+-    __slots__ = 'name', 'value'
+-    id = '<directive>'
+-
+-    def __init__(self, name, value, start_mark, end_mark):
+-        # type: (Any, Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.name = name
+-        self.value = value
+-
+-
+-class DocumentStartToken(Token):
+-    __slots__ = ()
+-    id = '<document start>'
+-
+-
+-class DocumentEndToken(Token):
+-    __slots__ = ()
+-    id = '<document end>'
+-
+-
+-class StreamStartToken(Token):
+-    __slots__ = ('encoding',)
+-    id = '<stream start>'
+-
+-    def __init__(self, start_mark=None, end_mark=None, encoding=None):
+-        # type: (Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.encoding = encoding
+-
+-
+-class StreamEndToken(Token):
+-    __slots__ = ()
+-    id = '<stream end>'
+-
+-
+-class BlockSequenceStartToken(Token):
+-    __slots__ = ()
+-    id = '<block sequence start>'
+-
+-
+-class BlockMappingStartToken(Token):
+-    __slots__ = ()
+-    id = '<block mapping start>'
+-
+-
+-class BlockEndToken(Token):
+-    __slots__ = ()
+-    id = '<block end>'
+-
+-
+-class FlowSequenceStartToken(Token):
+-    __slots__ = ()
+-    id = '['
+-
+-
+-class FlowMappingStartToken(Token):
+-    __slots__ = ()
+-    id = '{'
+-
+-
+-class FlowSequenceEndToken(Token):
+-    __slots__ = ()
+-    id = ']'
+-
+-
+-class FlowMappingEndToken(Token):
+-    __slots__ = ()
+-    id = '}'
+-
+-
+-class KeyToken(Token):
+-    __slots__ = ()
+-    id = '?'
+-
+-    # def x__repr__(self):
+-    #     return 'KeyToken({})'.format(
+-    #         self.start_mark.buffer[self.start_mark.index:].split(None, 1)[0])
+-
+-
+-class ValueToken(Token):
+-    __slots__ = ()
+-    id = ':'
+-
+-
+-class BlockEntryToken(Token):
+-    __slots__ = ()
+-    id = '-'
+-
+-
+-class FlowEntryToken(Token):
+-    __slots__ = ()
+-    id = ','
+-
+-
+-class AliasToken(Token):
+-    __slots__ = ('value',)
+-    id = '<alias>'
+-
+-    def __init__(self, value, start_mark, end_mark):
+-        # type: (Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.value = value
+-
+-
+-class AnchorToken(Token):
+-    __slots__ = ('value',)
+-    id = '<anchor>'
+-
+-    def __init__(self, value, start_mark, end_mark):
+-        # type: (Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.value = value
+-
+-
+-class TagToken(Token):
+-    __slots__ = ('value',)
+-    id = '<tag>'
+-
+-    def __init__(self, value, start_mark, end_mark):
+-        # type: (Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.value = value
+-
+-
+-class ScalarToken(Token):
+-    __slots__ = 'value', 'plain', 'style'
+-    id = '<scalar>'
+-
+-    def __init__(self, value, plain, start_mark, end_mark, style=None):
+-        # type: (Any, Any, Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.value = value
+-        self.plain = plain
+-        self.style = style
+-
+-
+-class CommentToken(Token):
+-    __slots__ = 'value', 'pre_done'
+-    id = '<comment>'
+-
+-    def __init__(self, value, start_mark, end_mark):
+-        # type: (Any, Any, Any) -> None
+-        Token.__init__(self, start_mark, end_mark)
+-        self.value = value
+-
+-    def reset(self):
+-        # type: () -> None
+-        if hasattr(self, 'pre_done'):
+-            delattr(self, 'pre_done')
+-
+-    def __repr__(self):
+-        # type: () -> Any
+-        v = '{!r}'.format(self.value)
+-        if SHOWLINES:
+-            try:
+-                v += ', line: ' + str(self.start_mark.line)
+-                v += ', col: ' + str(self.start_mark.column)
+-            except:  # NOQA
+-                pass
+-        return 'CommentToken({})'.format(v)
+-
+-    def __eq__(self, other):
+-        # type: (Any) -> bool
+-        if self.start_mark != other.start_mark:
+-            return False
+-        if self.end_mark != other.end_mark:
+-            return False
+-        if self.value != other.value:
+-            return False
+-        return True
+-
+-    def __ne__(self, other):
+-        # type: (Any) -> bool
+-        return not self.__eq__(other)
+diff --git a/dynaconf/vendor_src/ruamel/yaml/util.py b/dynaconf/vendor_src/ruamel/yaml/util.py
+deleted file mode 100644
+index 3eb7d76..0000000
+--- a/dynaconf/vendor_src/ruamel/yaml/util.py
++++ /dev/null
+@@ -1,190 +0,0 @@
+-# coding: utf-8
+-
+-"""
+-some helper functions that might be generally useful
+-"""
+-
+-from __future__ import absolute_import, print_function
+-
+-from functools import partial
+-import re
+-
+-from .compat import text_type, binary_type
+-
+-if False:  # MYPY
+-    from typing import Any, Dict, Optional, List, Text  # NOQA
+-    from .compat import StreamTextType  # NOQA
+-
+-
+-class LazyEval(object):
+-    """
+-    Lightweight wrapper around lazily evaluated func(*args, **kwargs).
+-
+-    func is only evaluated when any attribute of its return value is accessed.
+-    Every attribute access is passed through to the wrapped value.
+-    (This only excludes special cases like method-wrappers, e.g., __hash__.)
+-    The sole additional attribute is the lazy_self function which holds the
+-    return value (or, prior to evaluation, func and arguments), in its closure.
+-    """
+-
+-    def __init__(self, func, *args, **kwargs):
+-        # type: (Any, Any, Any) -> None
+-        def lazy_self():
+-            # type: () -> Any
+-            return_value = func(*args, **kwargs)
+-            object.__setattr__(self, 'lazy_self', lambda: return_value)
+-            return return_value
+-
+-        object.__setattr__(self, 'lazy_self', lazy_self)
+-
+-    def __getattribute__(self, name):
+-        # type: (Any) -> Any
+-        lazy_self = object.__getattribute__(self, 'lazy_self')
+-        if name == 'lazy_self':
+-            return lazy_self
+-        return getattr(lazy_self(), name)
+-
+-    def __setattr__(self, name, value):
+-        # type: (Any, Any) -> None
+-        setattr(self.lazy_self(), name, value)
+-
+-
+-RegExp = partial(LazyEval, re.compile)
+-
+-
+-# originally as comment
+-# https://github.com/pre-commit/pre-commit/pull/211#issuecomment-186466605
+-# if you use this in your code, I suggest adding a test in your test suite
+-# that check this routines output against a known piece of your YAML
+-# before upgrades to this code break your round-tripped YAML
+-def load_yaml_guess_indent(stream, **kw):
+-    # type: (StreamTextType, Any) -> Any
+-    """guess the indent and block sequence indent of yaml stream/string
+-
+-    returns round_trip_loaded stream, indent level, block sequence indent
+-    - block sequence indent is the number of spaces before a dash relative to previous indent
+-    - if there are no block sequences, indent is taken from nested mappings, block sequence
+-      indent is unset (None) in that case
+-    """
+-    from .main import round_trip_load
+-
+-    # load a yaml file guess the indentation, if you use TABs ...
+-    def leading_spaces(l):
+-        # type: (Any) -> int
+-        idx = 0
+-        while idx < len(l) and l[idx] == ' ':
+-            idx += 1
+-        return idx
+-
+-    if isinstance(stream, text_type):
+-        yaml_str = stream  # type: Any
+-    elif isinstance(stream, binary_type):
+-        # most likely, but the Reader checks BOM for this
+-        yaml_str = stream.decode('utf-8')
+-    else:
+-        yaml_str = stream.read()
+-    map_indent = None
+-    indent = None  # default if not found for some reason
+-    block_seq_indent = None
+-    prev_line_key_only = None
+-    key_indent = 0
+-    for line in yaml_str.splitlines():
+-        rline = line.rstrip()
+-        lline = rline.lstrip()
+-        if lline.startswith('- '):
+-            l_s = leading_spaces(line)
+-            block_seq_indent = l_s - key_indent
+-            idx = l_s + 1
+-            while line[idx] == ' ':  # this will end as we rstripped
+-                idx += 1
+-            if line[idx] == '#':  # comment after -
+-                continue
+-            indent = idx - key_indent
+-            break
+-        if map_indent is None and prev_line_key_only is not None and rline:
+-            idx = 0
+-            while line[idx] in ' -':
+-                idx += 1
+-            if idx > prev_line_key_only:
+-                map_indent = idx - prev_line_key_only
+-        if rline.endswith(':'):
+-            key_indent = leading_spaces(line)
+-            idx = 0
+-            while line[idx] == ' ':  # this will end on ':'
+-                idx += 1
+-            prev_line_key_only = idx
+-            continue
+-        prev_line_key_only = None
+-    if indent is None and map_indent is not None:
+-        indent = map_indent
+-    return round_trip_load(yaml_str, **kw), indent, block_seq_indent
+-
+-
+-def configobj_walker(cfg):
+-    # type: (Any) -> Any
+-    """
+-    walks over a ConfigObj (INI file with comments) generating
+-    corresponding YAML output (including comments
+-    """
+-    from configobj import ConfigObj  # type: ignore
+-
+-    assert isinstance(cfg, ConfigObj)
+-    for c in cfg.initial_comment:
+-        if c.strip():
+-            yield c
+-    for s in _walk_section(cfg):
+-        if s.strip():
+-            yield s
+-    for c in cfg.final_comment:
+-        if c.strip():
+-            yield c
+-
+-
+-def _walk_section(s, level=0):
+-    # type: (Any, int) -> Any
+-    from configobj import Section
+-
+-    assert isinstance(s, Section)
+-    indent = u'  ' * level
+-    for name in s.scalars:
+-        for c in s.comments[name]:
+-            yield indent + c.strip()
+-        x = s[name]
+-        if u'\n' in x:
+-            i = indent + u'  '
+-            x = u'|\n' + i + x.strip().replace(u'\n', u'\n' + i)
+-        elif ':' in x:
+-            x = u"'" + x.replace(u"'", u"''") + u"'"
+-        line = u'{0}{1}: {2}'.format(indent, name, x)
+-        c = s.inline_comments[name]
+-        if c:
+-            line += u' ' + c
+-        yield line
+-    for name in s.sections:
+-        for c in s.comments[name]:
+-            yield indent + c.strip()
+-        line = u'{0}{1}:'.format(indent, name)
+-        c = s.inline_comments[name]
+-        if c:
+-            line += u' ' + c
+-        yield line
+-        for val in _walk_section(s[name], level=level + 1):
+-            yield val
+-
+-
+-# def config_obj_2_rt_yaml(cfg):
+-#     from .comments import CommentedMap, CommentedSeq
+-#     from configobj import ConfigObj
+-#     assert isinstance(cfg, ConfigObj)
+-#     #for c in cfg.initial_comment:
+-#     #    if c.strip():
+-#     #        pass
+-#     cm = CommentedMap()
+-#     for name in s.sections:
+-#         cm[name] = d = CommentedMap()
+-#
+-#
+-#     #for c in cfg.final_comment:
+-#     #    if c.strip():
+-#     #        yield c
+-#     return cm
+diff --git a/dynaconf/vendor_src/toml/README.md b/dynaconf/vendor_src/toml/README.md
+deleted file mode 100644
+index cbe16fd..0000000
+--- a/dynaconf/vendor_src/toml/README.md
++++ /dev/null
+@@ -1,5 +0,0 @@
+-## python-toml
+-
+-Vendored dep taken from: https://github.com/uiri/toml
+-Licensed under BSD: https://github.com/uiri/toml/blob/master/LICENSE
+-Current version: 0.10.8
+diff --git a/dynaconf/vendor_src/toml/__init__.py b/dynaconf/vendor_src/toml/__init__.py
+deleted file mode 100644
+index 338d74c..0000000
+--- a/dynaconf/vendor_src/toml/__init__.py
++++ /dev/null
+@@ -1,25 +0,0 @@
+-"""Python module which parses and emits TOML.
+-
+-Released under the MIT license.
+-"""
+-
+-from . import encoder
+-from . import decoder
+-
+-__version__ = "0.10.1"
+-_spec_ = "0.5.0"
+-
+-load = decoder.load
+-loads = decoder.loads
+-TomlDecoder = decoder.TomlDecoder
+-TomlDecodeError = decoder.TomlDecodeError
+-TomlPreserveCommentDecoder = decoder.TomlPreserveCommentDecoder
+-
+-dump = encoder.dump
+-dumps = encoder.dumps
+-TomlEncoder = encoder.TomlEncoder
+-TomlArraySeparatorEncoder = encoder.TomlArraySeparatorEncoder
+-TomlPreserveInlineDictEncoder = encoder.TomlPreserveInlineDictEncoder
+-TomlNumpyEncoder = encoder.TomlNumpyEncoder
+-TomlPreserveCommentEncoder = encoder.TomlPreserveCommentEncoder
+-TomlPathlibEncoder = encoder.TomlPathlibEncoder
+diff --git a/dynaconf/vendor_src/toml/decoder.py b/dynaconf/vendor_src/toml/decoder.py
+deleted file mode 100644
+index 9229733..0000000
+--- a/dynaconf/vendor_src/toml/decoder.py
++++ /dev/null
+@@ -1,1052 +0,0 @@
+-import datetime
+-import io
+-from os import linesep
+-import re
+-import sys
+-
+-from .tz import TomlTz
+-
+-if sys.version_info < (3,):
+-    _range = xrange  # noqa: F821
+-else:
+-    unicode = str
+-    _range = range
+-    basestring = str
+-    unichr = chr
+-
+-
+-def _detect_pathlib_path(p):
+-    if (3, 4) <= sys.version_info:
+-        import pathlib
+-        if isinstance(p, pathlib.PurePath):
+-            return True
+-    return False
+-
+-
+-def _ispath(p):
+-    if isinstance(p, (bytes, basestring)):
+-        return True
+-    return _detect_pathlib_path(p)
+-
+-
+-def _getpath(p):
+-    if (3, 6) <= sys.version_info:
+-        import os
+-        return os.fspath(p)
+-    if _detect_pathlib_path(p):
+-        return str(p)
+-    return p
+-
+-
+-try:
+-    FNFError = FileNotFoundError
+-except NameError:
+-    FNFError = IOError
+-
+-
+-TIME_RE = re.compile(r"([0-9]{2}):([0-9]{2}):([0-9]{2})(\.([0-9]{3,6}))?")
+-
+-
+-class TomlDecodeError(ValueError):
+-    """Base toml Exception / Error."""
+-
+-    def __init__(self, msg, doc, pos):
+-        lineno = doc.count('\n', 0, pos) + 1
+-        colno = pos - doc.rfind('\n', 0, pos)
+-        emsg = '{} (line {} column {} char {})'.format(msg, lineno, colno, pos)
+-        ValueError.__init__(self, emsg)
+-        self.msg = msg
+-        self.doc = doc
+-        self.pos = pos
+-        self.lineno = lineno
+-        self.colno = colno
+-
+-
+-# Matches a TOML number, which allows underscores for readability
+-_number_with_underscores = re.compile('([0-9])(_([0-9]))*')
+-
+-
+-class CommentValue(object):
+-    def __init__(self, val, comment, beginline, _dict):
+-        self.val = val
+-        separator = "\n" if beginline else " "
+-        self.comment = separator + comment
+-        self._dict = _dict
+-
+-    def __getitem__(self, key):
+-        return self.val[key]
+-
+-    def __setitem__(self, key, value):
+-        self.val[key] = value
+-
+-    def dump(self, dump_value_func):
+-        retstr = dump_value_func(self.val)
+-        if isinstance(self.val, self._dict):
+-            return self.comment + "\n" + unicode(retstr)
+-        else:
+-            return unicode(retstr) + self.comment
+-
+-
+-def _strictly_valid_num(n):
+-    n = n.strip()
+-    if not n:
+-        return False
+-    if n[0] == '_':
+-        return False
+-    if n[-1] == '_':
+-        return False
+-    if "_." in n or "._" in n:
+-        return False
+-    if len(n) == 1:
+-        return True
+-    if n[0] == '0' and n[1] not in ['.', 'o', 'b', 'x']:
+-        return False
+-    if n[0] == '+' or n[0] == '-':
+-        n = n[1:]
+-        if len(n) > 1 and n[0] == '0' and n[1] != '.':
+-            return False
+-    if '__' in n:
+-        return False
+-    return True
+-
+-
+-def load(f, _dict=dict, decoder=None):
+-    """Parses named file or files as toml and returns a dictionary
+-
+-    Args:
+-        f: Path to the file to open, array of files to read into single dict
+-           or a file descriptor
+-        _dict: (optional) Specifies the class of the returned toml dictionary
+-        decoder: The decoder to use
+-
+-    Returns:
+-        Parsed toml file represented as a dictionary
+-
+-    Raises:
+-        TypeError -- When f is invalid type
+-        TomlDecodeError: Error while decoding toml
+-        IOError / FileNotFoundError -- When an array with no valid (existing)
+-        (Python 2 / Python 3)          file paths is passed
+-    """
+-
+-    if _ispath(f):
+-        with io.open(_getpath(f), encoding='utf-8') as ffile:
+-            return loads(ffile.read(), _dict, decoder)
+-    elif isinstance(f, list):
+-        from os import path as op
+-        from warnings import warn
+-        if not [path for path in f if op.exists(path)]:
+-            error_msg = "Load expects a list to contain filenames only."
+-            error_msg += linesep
+-            error_msg += ("The list needs to contain the path of at least one "
+-                          "existing file.")
+-            raise FNFError(error_msg)
+-        if decoder is None:
+-            decoder = TomlDecoder(_dict)
+-        d = decoder.get_empty_table()
+-        for l in f:  # noqa: E741
+-            if op.exists(l):
+-                d.update(load(l, _dict, decoder))
+-            else:
+-                warn("Non-existent filename in list with at least one valid "
+-                     "filename")
+-        return d
+-    else:
+-        try:
+-            return loads(f.read(), _dict, decoder)
+-        except AttributeError:
+-            raise TypeError("You can only load a file descriptor, filename or "
+-                            "list")
+-
+-
+-_groupname_re = re.compile(r'^[A-Za-z0-9_-]+$')
+-
+-
+-def loads(s, _dict=dict, decoder=None):
+-    """Parses string as toml
+-
+-    Args:
+-        s: String to be parsed
+-        _dict: (optional) Specifies the class of the returned toml dictionary
+-
+-    Returns:
+-        Parsed toml file represented as a dictionary
+-
+-    Raises:
+-        TypeError: When a non-string is passed
+-        TomlDecodeError: Error while decoding toml
+-    """
+-
+-    implicitgroups = []
+-    if decoder is None:
+-        decoder = TomlDecoder(_dict)
+-    retval = decoder.get_empty_table()
+-    currentlevel = retval
+-    if not isinstance(s, basestring):
+-        raise TypeError("Expecting something like a string")
+-
+-    if not isinstance(s, unicode):
+-        s = s.decode('utf8')
+-
+-    original = s
+-    sl = list(s)
+-    openarr = 0
+-    openstring = False
+-    openstrchar = ""
+-    multilinestr = False
+-    arrayoftables = False
+-    beginline = True
+-    keygroup = False
+-    dottedkey = False
+-    keyname = 0
+-    key = ''
+-    prev_key = ''
+-    line_no = 1
+-
+-    for i, item in enumerate(sl):
+-        if item == '\r' and sl[i + 1] == '\n':
+-            sl[i] = ' '
+-            continue
+-        if keyname:
+-            key += item
+-            if item == '\n':
+-                raise TomlDecodeError("Key name found without value."
+-                                      " Reached end of line.", original, i)
+-            if openstring:
+-                if item == openstrchar:
+-                    oddbackslash = False
+-                    k = 1
+-                    while i >= k and sl[i - k] == '\\':
+-                        oddbackslash = not oddbackslash
+-                        k += 1
+-                    if not oddbackslash:
+-                        keyname = 2
+-                        openstring = False
+-                        openstrchar = ""
+-                continue
+-            elif keyname == 1:
+-                if item.isspace():
+-                    keyname = 2
+-                    continue
+-                elif item == '.':
+-                    dottedkey = True
+-                    continue
+-                elif item.isalnum() or item == '_' or item == '-':
+-                    continue
+-                elif (dottedkey and sl[i - 1] == '.' and
+-                      (item == '"' or item == "'")):
+-                    openstring = True
+-                    openstrchar = item
+-                    continue
+-            elif keyname == 2:
+-                if item.isspace():
+-                    if dottedkey:
+-                        nextitem = sl[i + 1]
+-                        if not nextitem.isspace() and nextitem != '.':
+-                            keyname = 1
+-                    continue
+-                if item == '.':
+-                    dottedkey = True
+-                    nextitem = sl[i + 1]
+-                    if not nextitem.isspace() and nextitem != '.':
+-                        keyname = 1
+-                    continue
+-            if item == '=':
+-                keyname = 0
+-                prev_key = key[:-1].rstrip()
+-                key = ''
+-                dottedkey = False
+-            else:
+-                raise TomlDecodeError("Found invalid character in key name: '" +
+-                                      item + "'. Try quoting the key name.",
+-                                      original, i)
+-        if item == "'" and openstrchar != '"':
+-            k = 1
+-            try:
+-                while sl[i - k] == "'":
+-                    k += 1
+-                    if k == 3:
+-                        break
+-            except IndexError:
+-                pass
+-            if k == 3:
+-                multilinestr = not multilinestr
+-                openstring = multilinestr
+-            else:
+-                openstring = not openstring
+-            if openstring:
+-                openstrchar = "'"
+-            else:
+-                openstrchar = ""
+-        if item == '"' and openstrchar != "'":
+-            oddbackslash = False
+-            k = 1
+-            tripquote = False
+-            try:
+-                while sl[i - k] == '"':
+-                    k += 1
+-                    if k == 3:
+-                        tripquote = True
+-                        break
+-                if k == 1 or (k == 3 and tripquote):
+-                    while sl[i - k] == '\\':
+-                        oddbackslash = not oddbackslash
+-                        k += 1
+-            except IndexError:
+-                pass
+-            if not oddbackslash:
+-                if tripquote:
+-                    multilinestr = not multilinestr
+-                    openstring = multilinestr
+-                else:
+-                    openstring = not openstring
+-            if openstring:
+-                openstrchar = '"'
+-            else:
+-                openstrchar = ""
+-        if item == '#' and (not openstring and not keygroup and
+-                            not arrayoftables):
+-            j = i
+-            comment = ""
+-            try:
+-                while sl[j] != '\n':
+-                    comment += s[j]
+-                    sl[j] = ' '
+-                    j += 1
+-            except IndexError:
+-                break
+-            if not openarr:
+-                decoder.preserve_comment(line_no, prev_key, comment, beginline)
+-        if item == '[' and (not openstring and not keygroup and
+-                            not arrayoftables):
+-            if beginline:
+-                if len(sl) > i + 1 and sl[i + 1] == '[':
+-                    arrayoftables = True
+-                else:
+-                    keygroup = True
+-            else:
+-                openarr += 1
+-        if item == ']' and not openstring:
+-            if keygroup:
+-                keygroup = False
+-            elif arrayoftables:
+-                if sl[i - 1] == ']':
+-                    arrayoftables = False
+-            else:
+-                openarr -= 1
+-        if item == '\n':
+-            if openstring or multilinestr:
+-                if not multilinestr:
+-                    raise TomlDecodeError("Unbalanced quotes", original, i)
+-                if ((sl[i - 1] == "'" or sl[i - 1] == '"') and (
+-                        sl[i - 2] == sl[i - 1])):
+-                    sl[i] = sl[i - 1]
+-                    if sl[i - 3] == sl[i - 1]:
+-                        sl[i - 3] = ' '
+-            elif openarr:
+-                sl[i] = ' '
+-            else:
+-                beginline = True
+-            line_no += 1
+-        elif beginline and sl[i] != ' ' and sl[i] != '\t':
+-            beginline = False
+-            if not keygroup and not arrayoftables:
+-                if sl[i] == '=':
+-                    raise TomlDecodeError("Found empty keyname. ", original, i)
+-                keyname = 1
+-                key += item
+-    if keyname:
+-        raise TomlDecodeError("Key name found without value."
+-                              " Reached end of file.", original, len(s))
+-    if openstring:  # reached EOF and have an unterminated string
+-        raise TomlDecodeError("Unterminated string found."
+-                              " Reached end of file.", original, len(s))
+-    s = ''.join(sl)
+-    s = s.split('\n')
+-    multikey = None
+-    multilinestr = ""
+-    multibackslash = False
+-    pos = 0
+-    for idx, line in enumerate(s):
+-        if idx > 0:
+-            pos += len(s[idx - 1]) + 1
+-
+-        decoder.embed_comments(idx, currentlevel)
+-
+-        if not multilinestr or multibackslash or '\n' not in multilinestr:
+-            line = line.strip()
+-        if line == "" and (not multikey or multibackslash):
+-            continue
+-        if multikey:
+-            if multibackslash:
+-                multilinestr += line
+-            else:
+-                multilinestr += line
+-            multibackslash = False
+-            closed = False
+-            if multilinestr[0] == '[':
+-                closed = line[-1] == ']'
+-            elif len(line) > 2:
+-                closed = (line[-1] == multilinestr[0] and
+-                          line[-2] == multilinestr[0] and
+-                          line[-3] == multilinestr[0])
+-            if closed:
+-                try:
+-                    value, vtype = decoder.load_value(multilinestr)
+-                except ValueError as err:
+-                    raise TomlDecodeError(str(err), original, pos)
+-                currentlevel[multikey] = value
+-                multikey = None
+-                multilinestr = ""
+-            else:
+-                k = len(multilinestr) - 1
+-                while k > -1 and multilinestr[k] == '\\':
+-                    multibackslash = not multibackslash
+-                    k -= 1
+-                if multibackslash:
+-                    multilinestr = multilinestr[:-1]
+-                else:
+-                    multilinestr += "\n"
+-            continue
+-        if line[0] == '[':
+-            arrayoftables = False
+-            if len(line) == 1:
+-                raise TomlDecodeError("Opening key group bracket on line by "
+-                                      "itself.", original, pos)
+-            if line[1] == '[':
+-                arrayoftables = True
+-                line = line[2:]
+-                splitstr = ']]'
+-            else:
+-                line = line[1:]
+-                splitstr = ']'
+-            i = 1
+-            quotesplits = decoder._get_split_on_quotes(line)
+-            quoted = False
+-            for quotesplit in quotesplits:
+-                if not quoted and splitstr in quotesplit:
+-                    break
+-                i += quotesplit.count(splitstr)
+-                quoted = not quoted
+-            line = line.split(splitstr, i)
+-            if len(line) < i + 1 or line[-1].strip() != "":
+-                raise TomlDecodeError("Key group not on a line by itself.",
+-                                      original, pos)
+-            groups = splitstr.join(line[:-1]).split('.')
+-            i = 0
+-            while i < len(groups):
+-                groups[i] = groups[i].strip()
+-                if len(groups[i]) > 0 and (groups[i][0] == '"' or
+-                                           groups[i][0] == "'"):
+-                    groupstr = groups[i]
+-                    j = i + 1
+-                    while not groupstr[0] == groupstr[-1]:
+-                        j += 1
+-                        if j > len(groups) + 2:
+-                            raise TomlDecodeError("Invalid group name '" +
+-                                                  groupstr + "' Something " +
+-                                                  "went wrong.", original, pos)
+-                        groupstr = '.'.join(groups[i:j]).strip()
+-                    groups[i] = groupstr[1:-1]
+-                    groups[i + 1:j] = []
+-                else:
+-                    if not _groupname_re.match(groups[i]):
+-                        raise TomlDecodeError("Invalid group name '" +
+-                                              groups[i] + "'. Try quoting it.",
+-                                              original, pos)
+-                i += 1
+-            currentlevel = retval
+-            for i in _range(len(groups)):
+-                group = groups[i]
+-                if group == "":
+-                    raise TomlDecodeError("Can't have a keygroup with an empty "
+-                                          "name", original, pos)
+-                try:
+-                    currentlevel[group]
+-                    if i == len(groups) - 1:
+-                        if group in implicitgroups:
+-                            implicitgroups.remove(group)
+-                            if arrayoftables:
+-                                raise TomlDecodeError("An implicitly defined "
+-                                                      "table can't be an array",
+-                                                      original, pos)
+-                        elif arrayoftables:
+-                            currentlevel[group].append(decoder.get_empty_table()
+-                                                       )
+-                        else:
+-                            raise TomlDecodeError("What? " + group +
+-                                                  " already exists?" +
+-                                                  str(currentlevel),
+-                                                  original, pos)
+-                except TypeError:
+-                    currentlevel = currentlevel[-1]
+-                    if group not in currentlevel:
+-                        currentlevel[group] = decoder.get_empty_table()
+-                        if i == len(groups) - 1 and arrayoftables:
+-                            currentlevel[group] = [decoder.get_empty_table()]
+-                except KeyError:
+-                    if i != len(groups) - 1:
+-                        implicitgroups.append(group)
+-                    currentlevel[group] = decoder.get_empty_table()
+-                    if i == len(groups) - 1 and arrayoftables:
+-                        currentlevel[group] = [decoder.get_empty_table()]
+-                currentlevel = currentlevel[group]
+-                if arrayoftables:
+-                    try:
+-                        currentlevel = currentlevel[-1]
+-                    except KeyError:
+-                        pass
+-        elif line[0] == "{":
+-            if line[-1] != "}":
+-                raise TomlDecodeError("Line breaks are not allowed in inline"
+-                                      "objects", original, pos)
+-            try:
+-                decoder.load_inline_object(line, currentlevel, multikey,
+-                                           multibackslash)
+-            except ValueError as err:
+-                raise TomlDecodeError(str(err), original, pos)
+-        elif "=" in line:
+-            try:
+-                ret = decoder.load_line(line, currentlevel, multikey,
+-                                        multibackslash)
+-            except ValueError as err:
+-                raise TomlDecodeError(str(err), original, pos)
+-            if ret is not None:
+-                multikey, multilinestr, multibackslash = ret
+-    return retval
+-
+-
+-def _load_date(val):
+-    microsecond = 0
+-    tz = None
+-    try:
+-        if len(val) > 19:
+-            if val[19] == '.':
+-                if val[-1].upper() == 'Z':
+-                    subsecondval = val[20:-1]
+-                    tzval = "Z"
+-                else:
+-                    subsecondvalandtz = val[20:]
+-                    if '+' in subsecondvalandtz:
+-                        splitpoint = subsecondvalandtz.index('+')
+-                        subsecondval = subsecondvalandtz[:splitpoint]
+-                        tzval = subsecondvalandtz[splitpoint:]
+-                    elif '-' in subsecondvalandtz:
+-                        splitpoint = subsecondvalandtz.index('-')
+-                        subsecondval = subsecondvalandtz[:splitpoint]
+-                        tzval = subsecondvalandtz[splitpoint:]
+-                    else:
+-                        tzval = None
+-                        subsecondval = subsecondvalandtz
+-                if tzval is not None:
+-                    tz = TomlTz(tzval)
+-                microsecond = int(int(subsecondval) *
+-                                  (10 ** (6 - len(subsecondval))))
+-            else:
+-                tz = TomlTz(val[19:])
+-    except ValueError:
+-        tz = None
+-    if "-" not in val[1:]:
+-        return None
+-    try:
+-        if len(val) == 10:
+-            d = datetime.date(
+-                int(val[:4]), int(val[5:7]),
+-                int(val[8:10]))
+-        else:
+-            d = datetime.datetime(
+-                int(val[:4]), int(val[5:7]),
+-                int(val[8:10]), int(val[11:13]),
+-                int(val[14:16]), int(val[17:19]), microsecond, tz)
+-    except ValueError:
+-        return None
+-    return d
+-
+-
+-def _load_unicode_escapes(v, hexbytes, prefix):
+-    skip = False
+-    i = len(v) - 1
+-    while i > -1 and v[i] == '\\':
+-        skip = not skip
+-        i -= 1
+-    for hx in hexbytes:
+-        if skip:
+-            skip = False
+-            i = len(hx) - 1
+-            while i > -1 and hx[i] == '\\':
+-                skip = not skip
+-                i -= 1
+-            v += prefix
+-            v += hx
+-            continue
+-        hxb = ""
+-        i = 0
+-        hxblen = 4
+-        if prefix == "\\U":
+-            hxblen = 8
+-        hxb = ''.join(hx[i:i + hxblen]).lower()
+-        if hxb.strip('0123456789abcdef'):
+-            raise ValueError("Invalid escape sequence: " + hxb)
+-        if hxb[0] == "d" and hxb[1].strip('01234567'):
+-            raise ValueError("Invalid escape sequence: " + hxb +
+-                             ". Only scalar unicode points are allowed.")
+-        v += unichr(int(hxb, 16))
+-        v += unicode(hx[len(hxb):])
+-    return v
+-
+-
+-# Unescape TOML string values.
+-
+-# content after the \
+-_escapes = ['0', 'b', 'f', 'n', 'r', 't', '"']
+-# What it should be replaced by
+-_escapedchars = ['\0', '\b', '\f', '\n', '\r', '\t', '\"']
+-# Used for substitution
+-_escape_to_escapedchars = dict(zip(_escapes, _escapedchars))
+-
+-
+-def _unescape(v):
+-    """Unescape characters in a TOML string."""
+-    i = 0
+-    backslash = False
+-    while i < len(v):
+-        if backslash:
+-            backslash = False
+-            if v[i] in _escapes:
+-                v = v[:i - 1] + _escape_to_escapedchars[v[i]] + v[i + 1:]
+-            elif v[i] == '\\':
+-                v = v[:i - 1] + v[i:]
+-            elif v[i] == 'u' or v[i] == 'U':
+-                i += 1
+-            else:
+-                raise ValueError("Reserved escape sequence used")
+-            continue
+-        elif v[i] == '\\':
+-            backslash = True
+-        i += 1
+-    return v
+-
+-
+-class InlineTableDict(object):
+-    """Sentinel subclass of dict for inline tables."""
+-
+-
+-class TomlDecoder(object):
+-
+-    def __init__(self, _dict=dict):
+-        self._dict = _dict
+-
+-    def get_empty_table(self):
+-        return self._dict()
+-
+-    def get_empty_inline_table(self):
+-        class DynamicInlineTableDict(self._dict, InlineTableDict):
+-            """Concrete sentinel subclass for inline tables.
+-            It is a subclass of _dict which is passed in dynamically at load
+-            time
+-
+-            It is also a subclass of InlineTableDict
+-            """
+-
+-        return DynamicInlineTableDict()
+-
+-    def load_inline_object(self, line, currentlevel, multikey=False,
+-                           multibackslash=False):
+-        candidate_groups = line[1:-1].split(",")
+-        groups = []
+-        if len(candidate_groups) == 1 and not candidate_groups[0].strip():
+-            candidate_groups.pop()
+-        while len(candidate_groups) > 0:
+-            candidate_group = candidate_groups.pop(0)
+-            try:
+-                _, value = candidate_group.split('=', 1)
+-            except ValueError:
+-                raise ValueError("Invalid inline table encountered")
+-            value = value.strip()
+-            if ((value[0] == value[-1] and value[0] in ('"', "'")) or (
+-                    value[0] in '-0123456789' or
+-                    value in ('true', 'false') or
+-                    (value[0] == "[" and value[-1] == "]") or
+-                    (value[0] == '{' and value[-1] == '}'))):
+-                groups.append(candidate_group)
+-            elif len(candidate_groups) > 0:
+-                candidate_groups[0] = (candidate_group + "," +
+-                                       candidate_groups[0])
+-            else:
+-                raise ValueError("Invalid inline table value encountered")
+-        for group in groups:
+-            status = self.load_line(group, currentlevel, multikey,
+-                                    multibackslash)
+-            if status is not None:
+-                break
+-
+-    def _get_split_on_quotes(self, line):
+-        doublequotesplits = line.split('"')
+-        quoted = False
+-        quotesplits = []
+-        if len(doublequotesplits) > 1 and "'" in doublequotesplits[0]:
+-            singlequotesplits = doublequotesplits[0].split("'")
+-            doublequotesplits = doublequotesplits[1:]
+-            while len(singlequotesplits) % 2 == 0 and len(doublequotesplits):
+-                singlequotesplits[-1] += '"' + doublequotesplits[0]
+-                doublequotesplits = doublequotesplits[1:]
+-                if "'" in singlequotesplits[-1]:
+-                    singlequotesplits = (singlequotesplits[:-1] +
+-                                         singlequotesplits[-1].split("'"))
+-            quotesplits += singlequotesplits
+-        for doublequotesplit in doublequotesplits:
+-            if quoted:
+-                quotesplits.append(doublequotesplit)
+-            else:
+-                quotesplits += doublequotesplit.split("'")
+-                quoted = not quoted
+-        return quotesplits
+-
+-    def load_line(self, line, currentlevel, multikey, multibackslash):
+-        i = 1
+-        quotesplits = self._get_split_on_quotes(line)
+-        quoted = False
+-        for quotesplit in quotesplits:
+-            if not quoted and '=' in quotesplit:
+-                break
+-            i += quotesplit.count('=')
+-            quoted = not quoted
+-        pair = line.split('=', i)
+-        strictly_valid = _strictly_valid_num(pair[-1])
+-        if _number_with_underscores.match(pair[-1]):
+-            pair[-1] = pair[-1].replace('_', '')
+-        while len(pair[-1]) and (pair[-1][0] != ' ' and pair[-1][0] != '\t' and
+-                                 pair[-1][0] != "'" and pair[-1][0] != '"' and
+-                                 pair[-1][0] != '[' and pair[-1][0] != '{' and
+-                                 pair[-1].strip() != 'true' and
+-                                 pair[-1].strip() != 'false'):
+-            try:
+-                float(pair[-1])
+-                break
+-            except ValueError:
+-                pass
+-            if _load_date(pair[-1]) is not None:
+-                break
+-            if TIME_RE.match(pair[-1]):
+-                break
+-            i += 1
+-            prev_val = pair[-1]
+-            pair = line.split('=', i)
+-            if prev_val == pair[-1]:
+-                raise ValueError("Invalid date or number")
+-            if strictly_valid:
+-                strictly_valid = _strictly_valid_num(pair[-1])
+-        pair = ['='.join(pair[:-1]).strip(), pair[-1].strip()]
+-        if '.' in pair[0]:
+-            if '"' in pair[0] or "'" in pair[0]:
+-                quotesplits = self._get_split_on_quotes(pair[0])
+-                quoted = False
+-                levels = []
+-                for quotesplit in quotesplits:
+-                    if quoted:
+-                        levels.append(quotesplit)
+-                    else:
+-                        levels += [level.strip() for level in
+-                                   quotesplit.split('.')]
+-                    quoted = not quoted
+-            else:
+-                levels = pair[0].split('.')
+-            while levels[-1] == "":
+-                levels = levels[:-1]
+-            for level in levels[:-1]:
+-                if level == "":
+-                    continue
+-                if level not in currentlevel:
+-                    currentlevel[level] = self.get_empty_table()
+-                currentlevel = currentlevel[level]
+-            pair[0] = levels[-1].strip()
+-        elif (pair[0][0] == '"' or pair[0][0] == "'") and \
+-                (pair[0][-1] == pair[0][0]):
+-            pair[0] = _unescape(pair[0][1:-1])
+-        k, koffset = self._load_line_multiline_str(pair[1])
+-        if k > -1:
+-            while k > -1 and pair[1][k + koffset] == '\\':
+-                multibackslash = not multibackslash
+-                k -= 1
+-            if multibackslash:
+-                multilinestr = pair[1][:-1]
+-            else:
+-                multilinestr = pair[1] + "\n"
+-            multikey = pair[0]
+-        else:
+-            value, vtype = self.load_value(pair[1], strictly_valid)
+-        try:
+-            currentlevel[pair[0]]
+-            raise ValueError("Duplicate keys!")
+-        except TypeError:
+-            raise ValueError("Duplicate keys!")
+-        except KeyError:
+-            if multikey:
+-                return multikey, multilinestr, multibackslash
+-            else:
+-                currentlevel[pair[0]] = value
+-
+-    def _load_line_multiline_str(self, p):
+-        poffset = 0
+-        if len(p) < 3:
+-            return -1, poffset
+-        if p[0] == '[' and (p.strip()[-1] != ']' and
+-                            self._load_array_isstrarray(p)):
+-            newp = p[1:].strip().split(',')
+-            while len(newp) > 1 and newp[-1][0] != '"' and newp[-1][0] != "'":
+-                newp = newp[:-2] + [newp[-2] + ',' + newp[-1]]
+-            newp = newp[-1]
+-            poffset = len(p) - len(newp)
+-            p = newp
+-        if p[0] != '"' and p[0] != "'":
+-            return -1, poffset
+-        if p[1] != p[0] or p[2] != p[0]:
+-            return -1, poffset
+-        if len(p) > 5 and p[-1] == p[0] and p[-2] == p[0] and p[-3] == p[0]:
+-            return -1, poffset
+-        return len(p) - 1, poffset
+-
+-    def load_value(self, v, strictly_valid=True):
+-        if not v:
+-            raise ValueError("Empty value is invalid")
+-        if v == 'true':
+-            return (True, "bool")
+-        elif v == 'false':
+-            return (False, "bool")
+-        elif v[0] == '"' or v[0] == "'":
+-            quotechar = v[0]
+-            testv = v[1:].split(quotechar)
+-            triplequote = False
+-            triplequotecount = 0
+-            if len(testv) > 1 and testv[0] == '' and testv[1] == '':
+-                testv = testv[2:]
+-                triplequote = True
+-            closed = False
+-            for tv in testv:
+-                if tv == '':
+-                    if triplequote:
+-                        triplequotecount += 1
+-                    else:
+-                        closed = True
+-                else:
+-                    oddbackslash = False
+-                    try:
+-                        i = -1
+-                        j = tv[i]
+-                        while j == '\\':
+-                            oddbackslash = not oddbackslash
+-                            i -= 1
+-                            j = tv[i]
+-                    except IndexError:
+-                        pass
+-                    if not oddbackslash:
+-                        if closed:
+-                            raise ValueError("Found tokens after a closed " +
+-                                             "string. Invalid TOML.")
+-                        else:
+-                            if not triplequote or triplequotecount > 1:
+-                                closed = True
+-                            else:
+-                                triplequotecount = 0
+-            if quotechar == '"':
+-                escapeseqs = v.split('\\')[1:]
+-                backslash = False
+-                for i in escapeseqs:
+-                    if i == '':
+-                        backslash = not backslash
+-                    else:
+-                        if i[0] not in _escapes and (i[0] != 'u' and
+-                                                     i[0] != 'U' and
+-                                                     not backslash):
+-                            raise ValueError("Reserved escape sequence used")
+-                        if backslash:
+-                            backslash = False
+-                for prefix in ["\\u", "\\U"]:
+-                    if prefix in v:
+-                        hexbytes = v.split(prefix)
+-                        v = _load_unicode_escapes(hexbytes[0], hexbytes[1:],
+-                                                  prefix)
+-                v = _unescape(v)
+-            if len(v) > 1 and v[1] == quotechar and (len(v) < 3 or
+-                                                     v[1] == v[2]):
+-                v = v[2:-2]
+-            return (v[1:-1], "str")
+-        elif v[0] == '[':
+-            return (self.load_array(v), "array")
+-        elif v[0] == '{':
+-            inline_object = self.get_empty_inline_table()
+-            self.load_inline_object(v, inline_object)
+-            return (inline_object, "inline_object")
+-        elif TIME_RE.match(v):
+-            h, m, s, _, ms = TIME_RE.match(v).groups()
+-            time = datetime.time(int(h), int(m), int(s), int(ms) if ms else 0)
+-            return (time, "time")
+-        else:
+-            parsed_date = _load_date(v)
+-            if parsed_date is not None:
+-                return (parsed_date, "date")
+-            if not strictly_valid:
+-                raise ValueError("Weirdness with leading zeroes or "
+-                                 "underscores in your number.")
+-            itype = "int"
+-            neg = False
+-            if v[0] == '-':
+-                neg = True
+-                v = v[1:]
+-            elif v[0] == '+':
+-                v = v[1:]
+-            v = v.replace('_', '')
+-            lowerv = v.lower()
+-            if '.' in v or ('x' not in v and ('e' in v or 'E' in v)):
+-                if '.' in v and v.split('.', 1)[1] == '':
+-                    raise ValueError("This float is missing digits after "
+-                                     "the point")
+-                if v[0] not in '0123456789':
+-                    raise ValueError("This float doesn't have a leading "
+-                                     "digit")
+-                v = float(v)
+-                itype = "float"
+-            elif len(lowerv) == 3 and (lowerv == 'inf' or lowerv == 'nan'):
+-                v = float(v)
+-                itype = "float"
+-            if itype == "int":
+-                v = int(v, 0)
+-            if neg:
+-                return (0 - v, itype)
+-            return (v, itype)
+-
+-    def bounded_string(self, s):
+-        if len(s) == 0:
+-            return True
+-        if s[-1] != s[0]:
+-            return False
+-        i = -2
+-        backslash = False
+-        while len(s) + i > 0:
+-            if s[i] == "\\":
+-                backslash = not backslash
+-                i -= 1
+-            else:
+-                break
+-        return not backslash
+-
+-    def _load_array_isstrarray(self, a):
+-        a = a[1:-1].strip()
+-        if a != '' and (a[0] == '"' or a[0] == "'"):
+-            return True
+-        return False
+-
+-    def load_array(self, a):
+-        atype = None
+-        retval = []
+-        a = a.strip()
+-        if '[' not in a[1:-1] or "" != a[1:-1].split('[')[0].strip():
+-            strarray = self._load_array_isstrarray(a)
+-            if not a[1:-1].strip().startswith('{'):
+-                a = a[1:-1].split(',')
+-            else:
+-                # a is an inline object, we must find the matching parenthesis
+-                # to define groups
+-                new_a = []
+-                start_group_index = 1
+-                end_group_index = 2
+-                open_bracket_count = 1 if a[start_group_index] == '{' else 0
+-                in_str = False
+-                while end_group_index < len(a[1:]):
+-                    if a[end_group_index] == '"' or a[end_group_index] == "'":
+-                        if in_str:
+-                            backslash_index = end_group_index - 1
+-                            while (backslash_index > -1 and
+-                                   a[backslash_index] == '\\'):
+-                                in_str = not in_str
+-                                backslash_index -= 1
+-                        in_str = not in_str
+-                    if not in_str and a[end_group_index] == '{':
+-                        open_bracket_count += 1
+-                    if in_str or a[end_group_index] != '}':
+-                        end_group_index += 1
+-                        continue
+-                    elif a[end_group_index] == '}' and open_bracket_count > 1:
+-                        open_bracket_count -= 1
+-                        end_group_index += 1
+-                        continue
+-
+-                    # Increase end_group_index by 1 to get the closing bracket
+-                    end_group_index += 1
+-
+-                    new_a.append(a[start_group_index:end_group_index])
+-
+-                    # The next start index is at least after the closing
+-                    # bracket, a closing bracket can be followed by a comma
+-                    # since we are in an array.
+-                    start_group_index = end_group_index + 1
+-                    while (start_group_index < len(a[1:]) and
+-                           a[start_group_index] != '{'):
+-                        start_group_index += 1
+-                    end_group_index = start_group_index + 1
+-                a = new_a
+-            b = 0
+-            if strarray:
+-                while b < len(a) - 1:
+-                    ab = a[b].strip()
+-                    while (not self.bounded_string(ab) or
+-                           (len(ab) > 2 and
+-                            ab[0] == ab[1] == ab[2] and
+-                            ab[-2] != ab[0] and
+-                            ab[-3] != ab[0])):
+-                        a[b] = a[b] + ',' + a[b + 1]
+-                        ab = a[b].strip()
+-                        if b < len(a) - 2:
+-                            a = a[:b + 1] + a[b + 2:]
+-                        else:
+-                            a = a[:b + 1]
+-                    b += 1
+-        else:
+-            al = list(a[1:-1])
+-            a = []
+-            openarr = 0
+-            j = 0
+-            for i in _range(len(al)):
+-                if al[i] == '[':
+-                    openarr += 1
+-                elif al[i] == ']':
+-                    openarr -= 1
+-                elif al[i] == ',' and not openarr:
+-                    a.append(''.join(al[j:i]))
+-                    j = i + 1
+-            a.append(''.join(al[j:]))
+-        for i in _range(len(a)):
+-            a[i] = a[i].strip()
+-            if a[i] != '':
+-                nval, ntype = self.load_value(a[i])
+-                if atype:
+-                    if ntype != atype:
+-                        raise ValueError("Not a homogeneous array")
+-                else:
+-                    atype = ntype
+-                retval.append(nval)
+-        return retval
+-
+-    def preserve_comment(self, line_no, key, comment, beginline):
+-        pass
+-
+-    def embed_comments(self, idx, currentlevel):
+-        pass
+-
+-
+-class TomlPreserveCommentDecoder(TomlDecoder):
+-
+-    def __init__(self, _dict=dict):
+-        self.saved_comments = {}
+-        super(TomlPreserveCommentDecoder, self).__init__(_dict)
+-
+-    def preserve_comment(self, line_no, key, comment, beginline):
+-        self.saved_comments[line_no] = (key, comment, beginline)
+-
+-    def embed_comments(self, idx, currentlevel):
+-        if idx not in self.saved_comments:
+-            return
+-
+-        key, comment, beginline = self.saved_comments[idx]
+-        currentlevel[key] = CommentValue(currentlevel[key], comment, beginline,
+-                                         self._dict)
+diff --git a/dynaconf/vendor_src/toml/encoder.py b/dynaconf/vendor_src/toml/encoder.py
+deleted file mode 100644
+index f908f27..0000000
+--- a/dynaconf/vendor_src/toml/encoder.py
++++ /dev/null
+@@ -1,304 +0,0 @@
+-import datetime
+-import re
+-import sys
+-from decimal import Decimal
+-
+-from .decoder import InlineTableDict
+-
+-if sys.version_info >= (3,):
+-    unicode = str
+-
+-
+-def dump(o, f, encoder=None):
+-    """Writes out dict as toml to a file
+-
+-    Args:
+-        o: Object to dump into toml
+-        f: File descriptor where the toml should be stored
+-        encoder: The ``TomlEncoder`` to use for constructing the output string
+-
+-    Returns:
+-        String containing the toml corresponding to dictionary
+-
+-    Raises:
+-        TypeError: When anything other than file descriptor is passed
+-    """
+-
+-    if not f.write:
+-        raise TypeError("You can only dump an object to a file descriptor")
+-    d = dumps(o, encoder=encoder)
+-    f.write(d)
+-    return d
+-
+-
+-def dumps(o, encoder=None):
+-    """Stringifies input dict as toml
+-
+-    Args:
+-        o: Object to dump into toml
+-        encoder: The ``TomlEncoder`` to use for constructing the output string
+-
+-    Returns:
+-        String containing the toml corresponding to dict
+-
+-    Examples:
+-        ```python
+-        >>> import toml
+-        >>> output = {
+-        ... 'a': "I'm a string",
+-        ... 'b': ["I'm", "a", "list"],
+-        ... 'c': 2400
+-        ... }
+-        >>> toml.dumps(output)
+-        'a = "I\'m a string"\nb = [ "I\'m", "a", "list",]\nc = 2400\n'
+-        ```
+-    """
+-
+-    retval = ""
+-    if encoder is None:
+-        encoder = TomlEncoder(o.__class__)
+-    addtoretval, sections = encoder.dump_sections(o, "")
+-    retval += addtoretval
+-    outer_objs = [id(o)]
+-    while sections:
+-        section_ids = [id(section) for section in sections]
+-        for outer_obj in outer_objs:
+-            if outer_obj in section_ids:
+-                raise ValueError("Circular reference detected")
+-        outer_objs += section_ids
+-        newsections = encoder.get_empty_table()
+-        for section in sections:
+-            addtoretval, addtosections = encoder.dump_sections(
+-                sections[section], section)
+-
+-            if addtoretval or (not addtoretval and not addtosections):
+-                if retval and retval[-2:] != "\n\n":
+-                    retval += "\n"
+-                retval += "[" + section + "]\n"
+-                if addtoretval:
+-                    retval += addtoretval
+-            for s in addtosections:
+-                newsections[section + "." + s] = addtosections[s]
+-        sections = newsections
+-    return retval
+-
+-
+-def _dump_str(v):
+-    if sys.version_info < (3,) and hasattr(v, 'decode') and isinstance(v, str):
+-        v = v.decode('utf-8')
+-    v = "%r" % v
+-    if v[0] == 'u':
+-        v = v[1:]
+-    singlequote = v.startswith("'")
+-    if singlequote or v.startswith('"'):
+-        v = v[1:-1]
+-    if singlequote:
+-        v = v.replace("\\'", "'")
+-        v = v.replace('"', '\\"')
+-    v = v.split("\\x")
+-    while len(v) > 1:
+-        i = -1
+-        if not v[0]:
+-            v = v[1:]
+-        v[0] = v[0].replace("\\\\", "\\")
+-        # No, I don't know why != works and == breaks
+-        joinx = v[0][i] != "\\"
+-        while v[0][:i] and v[0][i] == "\\":
+-            joinx = not joinx
+-            i -= 1
+-        if joinx:
+-            joiner = "x"
+-        else:
+-            joiner = "u00"
+-        v = [v[0] + joiner + v[1]] + v[2:]
+-    return unicode('"' + v[0] + '"')
+-
+-
+-def _dump_float(v):
+-    return "{}".format(v).replace("e+0", "e+").replace("e-0", "e-")
+-
+-
+-def _dump_time(v):
+-    utcoffset = v.utcoffset()
+-    if utcoffset is None:
+-        return v.isoformat()
+-    # The TOML norm specifies that it's local time thus we drop the offset
+-    return v.isoformat()[:-6]
+-
+-
+-class TomlEncoder(object):
+-
+-    def __init__(self, _dict=dict, preserve=False):
+-        self._dict = _dict
+-        self.preserve = preserve
+-        self.dump_funcs = {
+-            str: _dump_str,
+-            unicode: _dump_str,
+-            list: self.dump_list,
+-            bool: lambda v: unicode(v).lower(),
+-            int: lambda v: v,
+-            float: _dump_float,
+-            Decimal: _dump_float,
+-            datetime.datetime: lambda v: v.isoformat().replace('+00:00', 'Z'),
+-            datetime.time: _dump_time,
+-            datetime.date: lambda v: v.isoformat()
+-        }
+-
+-    def get_empty_table(self):
+-        return self._dict()
+-
+-    def dump_list(self, v):
+-        retval = "["
+-        for u in v:
+-            retval += " " + unicode(self.dump_value(u)) + ","
+-        retval += "]"
+-        return retval
+-
+-    def dump_inline_table(self, section):
+-        """Preserve inline table in its compact syntax instead of expanding
+-        into subsection.
+-
+-        https://github.com/toml-lang/toml#user-content-inline-table
+-        """
+-        retval = ""
+-        if isinstance(section, dict):
+-            val_list = []
+-            for k, v in section.items():
+-                val = self.dump_inline_table(v)
+-                val_list.append(k + " = " + val)
+-            retval += "{ " + ", ".join(val_list) + " }\n"
+-            return retval
+-        else:
+-            return unicode(self.dump_value(section))
+-
+-    def dump_value(self, v):
+-        # Lookup function corresponding to v's type
+-        dump_fn = self.dump_funcs.get(type(v))
+-        if dump_fn is None and hasattr(v, '__iter__'):
+-            dump_fn = self.dump_funcs[list]
+-        # Evaluate function (if it exists) else return v
+-        return dump_fn(v) if dump_fn is not None else self.dump_funcs[str](v)
+-
+-    def dump_sections(self, o, sup):
+-        retstr = ""
+-        if sup != "" and sup[-1] != ".":
+-            sup += '.'
+-        retdict = self._dict()
+-        arraystr = ""
+-        for section in o:
+-            section = unicode(section)
+-            qsection = section
+-            if not re.match(r'^[A-Za-z0-9_-]+$', section):
+-                qsection = _dump_str(section)
+-            if not isinstance(o[section], dict):
+-                arrayoftables = False
+-                if isinstance(o[section], list):
+-                    for a in o[section]:
+-                        if isinstance(a, dict):
+-                            arrayoftables = True
+-                if arrayoftables:
+-                    for a in o[section]:
+-                        arraytabstr = "\n"
+-                        arraystr += "[[" + sup + qsection + "]]\n"
+-                        s, d = self.dump_sections(a, sup + qsection)
+-                        if s:
+-                            if s[0] == "[":
+-                                arraytabstr += s
+-                            else:
+-                                arraystr += s
+-                        while d:
+-                            newd = self._dict()
+-                            for dsec in d:
+-                                s1, d1 = self.dump_sections(d[dsec], sup +
+-                                                            qsection + "." +
+-                                                            dsec)
+-                                if s1:
+-                                    arraytabstr += ("[" + sup + qsection +
+-                                                    "." + dsec + "]\n")
+-                                    arraytabstr += s1
+-                                for s1 in d1:
+-                                    newd[dsec + "." + s1] = d1[s1]
+-                            d = newd
+-                        arraystr += arraytabstr
+-                else:
+-                    if o[section] is not None:
+-                        retstr += (qsection + " = " +
+-                                   unicode(self.dump_value(o[section])) + '\n')
+-            elif self.preserve and isinstance(o[section], InlineTableDict):
+-                retstr += (qsection + " = " +
+-                           self.dump_inline_table(o[section]))
+-            else:
+-                retdict[qsection] = o[section]
+-        retstr += arraystr
+-        return (retstr, retdict)
+-
+-
+-class TomlPreserveInlineDictEncoder(TomlEncoder):
+-
+-    def __init__(self, _dict=dict):
+-        super(TomlPreserveInlineDictEncoder, self).__init__(_dict, True)
+-
+-
+-class TomlArraySeparatorEncoder(TomlEncoder):
+-
+-    def __init__(self, _dict=dict, preserve=False, separator=","):
+-        super(TomlArraySeparatorEncoder, self).__init__(_dict, preserve)
+-        if separator.strip() == "":
+-            separator = "," + separator
+-        elif separator.strip(' \t\n\r,'):
+-            raise ValueError("Invalid separator for arrays")
+-        self.separator = separator
+-
+-    def dump_list(self, v):
+-        t = []
+-        retval = "["
+-        for u in v:
+-            t.append(self.dump_value(u))
+-        while t != []:
+-            s = []
+-            for u in t:
+-                if isinstance(u, list):
+-                    for r in u:
+-                        s.append(r)
+-                else:
+-                    retval += " " + unicode(u) + self.separator
+-            t = s
+-        retval += "]"
+-        return retval
+-
+-
+-class TomlNumpyEncoder(TomlEncoder):
+-
+-    def __init__(self, _dict=dict, preserve=False):
+-        import numpy as np
+-        super(TomlNumpyEncoder, self).__init__(_dict, preserve)
+-        self.dump_funcs[np.float16] = _dump_float
+-        self.dump_funcs[np.float32] = _dump_float
+-        self.dump_funcs[np.float64] = _dump_float
+-        self.dump_funcs[np.int16] = self._dump_int
+-        self.dump_funcs[np.int32] = self._dump_int
+-        self.dump_funcs[np.int64] = self._dump_int
+-
+-    def _dump_int(self, v):
+-        return "{}".format(int(v))
+-
+-
+-class TomlPreserveCommentEncoder(TomlEncoder):
+-
+-    def __init__(self, _dict=dict, preserve=False):
+-        from dynaconf.vendor.toml.decoder import CommentValue
+-        super(TomlPreserveCommentEncoder, self).__init__(_dict, preserve)
+-        self.dump_funcs[CommentValue] = lambda v: v.dump(self.dump_value)
+-
+-
+-class TomlPathlibEncoder(TomlEncoder):
+-
+-    def _dump_pathlib_path(self, v):
+-        return _dump_str(str(v))
+-
+-    def dump_value(self, v):
+-        if (3, 4) <= sys.version_info:
+-            import pathlib
+-            if isinstance(v, pathlib.PurePath):
+-                v = str(v)
+-        return super(TomlPathlibEncoder, self).dump_value(v)
+diff --git a/dynaconf/vendor_src/toml/ordered.py b/dynaconf/vendor_src/toml/ordered.py
+deleted file mode 100644
+index 6b8d9c1..0000000
+--- a/dynaconf/vendor_src/toml/ordered.py
++++ /dev/null
+@@ -1,15 +0,0 @@
+-from collections import OrderedDict
+-from . import TomlEncoder
+-from . import TomlDecoder
+-
+-
+-class TomlOrderedDecoder(TomlDecoder):
+-
+-    def __init__(self):
+-        super(self.__class__, self).__init__(_dict=OrderedDict)
+-
+-
+-class TomlOrderedEncoder(TomlEncoder):
+-
+-    def __init__(self):
+-        super(self.__class__, self).__init__(_dict=OrderedDict)
+diff --git a/dynaconf/vendor_src/toml/tz.py b/dynaconf/vendor_src/toml/tz.py
+deleted file mode 100644
+index 93c3c8a..0000000
+--- a/dynaconf/vendor_src/toml/tz.py
++++ /dev/null
+@@ -1,21 +0,0 @@
+-from datetime import tzinfo, timedelta
+-
+-
+-class TomlTz(tzinfo):
+-    def __init__(self, toml_offset):
+-        if toml_offset == "Z":
+-            self._raw_offset = "+00:00"
+-        else:
+-            self._raw_offset = toml_offset
+-        self._sign = -1 if self._raw_offset[0] == '-' else 1
+-        self._hours = int(self._raw_offset[1:3])
+-        self._minutes = int(self._raw_offset[4:6])
+-
+-    def tzname(self, dt):
+-        return "UTC" + self._raw_offset
+-
+-    def utcoffset(self, dt):
+-        return self._sign * timedelta(hours=self._hours, minutes=self._minutes)
+-
+-    def dst(self, dt):
+-        return timedelta(0)
+diff --git a/dynaconf/vendor_src/vendor.txt b/dynaconf/vendor_src/vendor.txt
+index add308d..daa2b60 100644
+--- a/dynaconf/vendor_src/vendor.txt
++++ b/dynaconf/vendor_src/vendor.txt
+@@ -1,5 +1 @@
+ python-box==4.2.3
+-toml==0.10.8
+-click==7.1.x
+-python-dotenv==0.13.0
+-ruamel.yaml==0.16.10
+diff --git a/tests/test_cli.py b/tests/test_cli.py
+index 9338851..726b009 100644
+--- a/tests/test_cli.py
++++ b/tests/test_cli.py
+@@ -11,7 +11,7 @@ from dynaconf.cli import main
+ from dynaconf.cli import read_file_in_root_directory
+ from dynaconf.cli import WRITERS
+ from dynaconf.utils.files import read_file
+-from dynaconf.vendor.click.testing import CliRunner
++from click.testing import CliRunner
+ 
+ 
+ runner = CliRunner()
+-- 
+2.32.0
+
diff --git a/gnu/packages/python-xyz.scm b/gnu/packages/python-xyz.scm
index c7f91dd977..d010956e7e 100644
--- a/gnu/packages/python-xyz.scm
+++ b/gnu/packages/python-xyz.scm
@@ -133,6 +133,7 @@
   #:use-module (gnu packages crypto)
   #:use-module (gnu packages databases)
   #:use-module (gnu packages dbm)
+  #:use-module (gnu packages django)
   #:use-module (gnu packages djvu)
   #:use-module (gnu packages docker)
   #:use-module (gnu packages enchant)
@@ -12507,6 +12508,16 @@ text.")
    (home-page "https://pypi.org/project/colorama/")
    (license license:bsd-3)))
 
+(define-public python-colorama-0.4.1
+  (package (inherit python-colorama)
+    (version "0.4.1")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "colorama" version))
+       (sha256
+        (base32 "0ba247bx5pc60hcpbf3rjsqk0whilg241i9qdfnlcwij5qgdgvh5"))))))
+
 (define-public python2-colorama
   (package-with-python2 python-colorama))
 
@@ -26145,6 +26156,18 @@ representing paths or filenames.")
 read key-value pairs from a .env file and set them as environment variables")
     (license license:bsd-3)))
 
+(define-public python-dotenv-0.13.0
+  (package (inherit python-dotenv)
+    (name "python-dotenv")
+    (version "0.13.0")
+    (source
+     (origin
+       (method url-fetch)
+       (uri (pypi-uri "python-dotenv" version))
+       (sha256
+        (base32
+         "0x5dagmfn31phrbxlwacw3s4w5vibv8fxqc62nqcdvdhjsy0k69v"))))))
+
 (define-public python-box
   (package
     (name "python-box")
@@ -26168,3 +26191,67 @@ read key-value pairs from a .env file and set them as environment variables")
      "This package provides the @code{python-box} Python module.
 It implements advanced Python dictionaries with dot notation access.")
     (license license:expat)))
+
+(define-public dynaconf
+  (package
+    (name "dynaconf")
+    (version "3.1.4")
+    (source
+     (origin
+       (method git-fetch)
+       (uri
+        (git-reference
+         (url "https://github.com/rochacbruno/dynaconf")
+         (commit version)))
+       (file-name (git-file-name name version))
+       (sha256
+        (base32
+         "0dafd7hb691g6s3yjfvl5gph5md73n6g9j44kjpbnbbilr5pc85g"))
+       (patches (search-patches "dynaconf-Unvendor-dependencies.patch"))))
+    (build-system python-build-system)
+    (arguments
+     `(#:phases
+       (modify-phases %standard-phases
+         (replace 'check
+           (lambda* (#:key tests? outputs #:allow-other-keys)
+             (when tests?
+               (setenv "PATH"
+                       (string-append (assoc-ref outputs "out") "/bin:"
+                                      (getenv "PATH")))
+               ;; These tests depend on hvac and a
+               ;; live Vault process.
+               (delete-file "tests/test_vault.py")
+               (invoke "make" "test_only"))
+             #t)))))
+    (propagated-inputs
+     `(("python-click" ,python-click)
+       ("python-dotenv" ,python-dotenv-0.13.0)
+       ("python-ruamel.yaml" ,python-ruamel.yaml)
+       ("python-toml" ,python-toml)))
+    (native-inputs
+     `(("make" ,gnu-make)
+       ("python-codecov" ,python-codecov)
+       ("python-configobj" ,python-configobj)
+       ("python-colorama" ,python-colorama-0.4.1)
+       ("python-django" ,python-django)
+       ("python-flake8" ,python-flake8)
+       ("python-flake8-debugger" ,python-flake8-debugger)
+       ("python-flake8-print" ,python-flake8-print)
+       ("python-flake8-todo" ,python-flake8-todo)
+       ("python-flask" ,python-flask)
+       ("python-future" ,python-future)
+       ("python-pep8-naming" ,python-pep8-naming)
+       ("python-pytest" ,python-pytest-6)
+       ("python-pytest-cov" ,python-pytest-cov)
+       ("python-pytest-forked" ,python-pytest-forked)
+       ("python-pytest-mock" ,python-pytest-mock)
+       ("python-pytest-xdist" ,python-pytest-xdist)
+       ("python-radon" ,python-radon)))
+    (home-page
+     "https://github.com/rochacbruno/dynaconf")
+    (synopsis
+     "The dynamic configurator for your Python Project")
+    (description
+     "This package provides @code{dynaconf} the dynamic configurator for
+your Python Project.")
+    (license license:expat)))
-- 
2.32.0





^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [bug#49281] Add dynaconf
  2021-06-29 22:38 [bug#49281] Add dynaconf paul
  2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
@ 2021-07-23  6:14 ` Sarah Morgensen
  2021-08-02 18:13   ` paul
  1 sibling, 1 reply; 10+ messages in thread
From: Sarah Morgensen @ 2021-07-23  6:14 UTC (permalink / raw)
  To: paul; +Cc: 49281

Hello,

Thanks for the patches :) The fight against vendoring is eternal.

I have a few suggestions:

* Unvendoring or otherwise removing files from sources is typically done
  with a snippet in the origin rather than a patch, as it's much smaller
  and doesn't break when updating. It might look like (untested):

(origin
  ...
  (modules '((guix build utils)))
  (snippet
    '(begin
      ;; Remove vendored dependencies
      (let ((unvendor '("click" "dotenv" "ruamel" "toml")))
        (with-directory-excursion "dynaconf/vendor"
          (for-each delete-file-recursively unvendor))
        (with-directory-excursion "dynaconf/vendor_src"
          (for-each delete-file-recursively unvendor))))))

  You'll still have to have the edits to dynaconf as a patch, of course.

* You've still included a python-box package despite none of the
  packages in your patch using it.

* pep8-naming has released 0.12.0, and tests pass :)

* Some of your patches no longer apply on master, and you should rebase
  them before sending a revised patchset. Consider using the `--base`
  option with format-patch, which helps git know what commit the patch
  is based on when applying.
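
  For reference, here is a throwaway session sketching what `--base` does
  (assumes `git` is installed; the repo, file, and commit messages are
  made up purely for illustration). The recorded `base-commit:` trailer
  is what lets reviewers and tooling know which commit to apply onto:

```shell
set -e
# Build a disposable repository with two commits.
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git config user.email "you@example.com"
git config user.name "You"
echo one > f && git add f && git commit -qm "base commit"
base=$(git rev-parse HEAD)
echo two > f && git commit -qam "feature commit"
# Record the base commit in the generated patch.
git format-patch -1 --base="$base" -o patches >/dev/null
# The patch now carries a base-commit trailer naming $base.
grep "base-commit: $base" patches/0001-feature-commit.patch
```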

paul <goodoldpaul@autistici.org> writes:

> Hi Guixers :),
>
> I'm sending a patch series to add dynaconf.
>
> Thank you for your time,
>
> Giacomo

Best,

Sarah





* [bug#49281] Add dynaconf
  2021-07-23  6:14 ` [bug#49281] " Sarah Morgensen
@ 2021-08-02 18:13   ` paul
  0 siblings, 0 replies; 10+ messages in thread
From: paul @ 2021-08-02 18:13 UTC (permalink / raw)
  To: Sarah Morgensen; +Cc: 49281

Dear Sarah,

thank you for your suggestions :D , I believe I addressed most of them.

On 7/23/21 8:14 AM, Sarah Morgensen wrote:
> * Unvendoring or otherwise removing files from sources is typically done
>    with a snippet in the origin rather than a patch, as it's much smaller
>    and doesn't break when updating. It might look like (untested):
>
> (origin
>    ...
>    (modules '((guix build utils)))
>    (snippet
>      '(begin
>        ;; Remove vendored dependencies
>        (let ((unvendor '("click" "dotenv" "ruamel" "toml")))
>          (with-directory-excursion "dynaconf/vendor"
>            (for-each delete-file-recursively unvendor))
>          (with-directory-excursion "dynaconf/vendor_src"
>            (for-each delete-file-recursively unvendor))))))
>
>    You'll still have to have the edits to dynaconf as a patch, of course.
It makes much more sense; now the patch just changes the imports, and
the actual removal is up to the snippet.
> * You've still included a python-box package despite none of the
>    packages in your patch using it.
Yes, I included it while unvendoring; I figured that since the tests
pass it would still make sense to upstream it. Should I remove it?
> * pep8-naming has released 12.0.0, and tests pass :)
Fixed, thanks!
> * Some of your patches no longer apply on master, and you should rebase
>    them before sending a revised patchset. Consider using the `--base`
>    option with format-patch, which helps git know what commit the patch
>    is based on when applying.

I rebased and I'll send the patches with
--base=f12a35cfa22092a7e3157c94abfef8335f86ac1c.

Thank you for your help!


Cheers,

giacomo







end of thread, other threads:[~2021-08-02 18:14 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-29 22:38 [bug#49281] Add dynaconf paul
2021-06-29 22:42 ` [bug#49281] [PATCH 1/7] gnu: Add python-flake8-debugger Giacomo Leidi
2021-06-29 22:42   ` [bug#49281] [PATCH 2/7] gnu: Add python-flake8-todo Giacomo Leidi
2021-06-29 22:42   ` [bug#49281] [PATCH 3/7] gnu: Add python-dotenv Giacomo Leidi
2021-06-29 22:42   ` [bug#49281] [PATCH 4/7] gnu: Add python-box Giacomo Leidi
2021-06-29 22:42   ` [bug#49281] [PATCH 5/7] gnu: python-ruamel.yaml: Update to 0.17.10 Giacomo Leidi
2021-06-29 22:42   ` [bug#49281] [PATCH 6/7] gnu: Add python-pep8-naming Giacomo Leidi
2021-06-29 22:42   ` [bug#49281] [PATCH 7/7] gnu: Add dynaconf Giacomo Leidi
2021-07-23  6:14 ` [bug#49281] " Sarah Morgensen
2021-08-02 18:13   ` paul

Code repositories for project(s) associated with this external index

	https://git.savannah.gnu.org/cgit/guix.git

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.