What's New In Python 3.2
- Author:
- Raymond Hettinger (Chinese translation by wh2099 at outlook dot com)
This article explains the new features in Python 3.2 as compared to 3.1. Python 3.2 was released on February 20, 2011. It focuses on a few highlights and gives some examples. For full details, see the Misc/NEWS [https://github.com/python/cpython/blob/076ca6c3c8df3030307e548d9be792ce3c1c6eea/Misc/NEWS] file.
See also
PEP 392 [https://peps.python.org/pep-0392/] - Python 3.2 Release Schedule
PEP 384: Defining a Stable ABI
In the past, extension modules built for one Python version were often not usable with other Python versions. Particularly on Windows, every feature release of Python required rebuilding all extension modules that you wanted to use. This requirement was the result of the free access to Python interpreter internals that extension modules could use.
With Python 3.2, an alternative approach becomes available: extension modules which restrict themselves to a limited API (by defining Py_LIMITED_API) cannot use many of the internals, but are constrained to a set of API functions that are promised to be stable for several releases. As a consequence, extension modules built for 3.2 in that mode will also work with 3.3, 3.4, and so on. Extension modules that make use of details of memory structures can still be built, but will need to be recompiled for every feature release.
See also
- PEP 384 [https://peps.python.org/pep-0384/] - Defining a Stable ABI
- PEP written by Martin von Löwis
PEP 389: Argparse Command Line Parsing Module
A new module for command line parsing, argparse, was introduced to overcome the limitations of optparse, which did not provide support for positional arguments (not just options), subcommands, required options and other common patterns of specifying and validating options.
This module has already had widespread success in the community as a third-party module. Being more fully featured than its predecessor, the argparse module is now the preferred module for command-line processing. The older module is still being kept available because of the substantial amount of legacy code that depends on it.
Here's an annotated example parser showing features like limiting results to a set of choices, specifying a metavar in the help screen, validating that one or more positional arguments is present, and making a required option:
- import argparse
- parser = argparse.ArgumentParser(
- description = 'Manage servers', # main description for help
- epilog = 'Tested on Solaris and Linux') # displayed after help
- parser.add_argument('action', # argument name
- choices = ['deploy', 'start', 'stop'], # three allowed values
- help = 'action on each target') # help msg
- parser.add_argument('targets',
- metavar = 'HOSTNAME', # var name used in help msg
- nargs = '+', # require one or more targets
- help = 'url for target machines') # help msg explanation
- parser.add_argument('-u', '--user', # -u or --user option
- required = True, # make it a required argument
- help = 'login as user')
An example of calling the parser on a command string:
- >>> cmd = 'deploy sneezy.example.com sleepy.example.com -u skycaptain'
- >>> result = parser.parse_args(cmd.split())
- >>> result.action
- 'deploy'
- >>> result.targets
- ['sneezy.example.com', 'sleepy.example.com']
- >>> result.user
- 'skycaptain'
An example of the parser's automatically generated help:
- >>> parser.parse_args('-h'.split())
- usage: manage_cloud.py [-h] -u USER
- {deploy,start,stop} HOSTNAME [HOSTNAME ...]
- Manage servers
- positional arguments:
- {deploy,start,stop} action on each target
- HOSTNAME url for target machines
- optional arguments:
- -h, --help show this help message and exit
- -u USER, --user USER login as user
- Tested on Solaris and Linux
An especially nice argparse feature is the ability to define subparsers, each with their own argument patterns and help displays:
- import argparse
- parser = argparse.ArgumentParser(prog='HELM')
- subparsers = parser.add_subparsers()
- parser_l = subparsers.add_parser('launch', help='Launch Control') # first subgroup
- parser_l.add_argument('-m', '--missiles', action='store_true')
- parser_l.add_argument('-t', '--torpedos', action='store_true')
- parser_m = subparsers.add_parser('move', help='Move Vessel', # second subgroup
- aliases=('steer', 'turn')) # equivalent names
- parser_m.add_argument('-c', '--course', type=int, required=True)
- parser_m.add_argument('-s', '--speed', type=int, default=0)
- $ ./helm.py --help # top level help (launch and move)
- $ ./helm.py launch --help # help for launch options
- $ ./helm.py launch --missiles # set missiles=True and torpedos=False
- $ ./helm.py steer --course 180 --speed 5 # set movement parameters
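The subcommands can also be exercised programmatically. Here is a minimal added sketch (not part of the original text), assuming the HELM parser defined above:
- >>> args = parser.parse_args(['move', '--course', '180', '--speed', '5'])
- >>> args.course, args.speed
- (180, 5)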
See also
- PEP 389 [https://peps.python.org/pep-0389/] - New Command Line Parsing Module
- PEP written by Steven Bethard
See Migrating optparse code to argparse for details on the differences from optparse.
PEP 391: Dictionary Based Configuration for Logging
The logging module provided two kinds of configuration, one style with function calls for each option or another style driven by an external file saved in a configparser format. Those options did not provide the flexibility to create configurations from JSON or YAML files, nor did they support incremental configuration, which is needed for specifying logger options from a command line.
To support a more flexible style, the module now offers logging.config.dictConfig() for specifying logging configuration with plain Python dictionaries. The configuration options include formatters, handlers, filters, and loggers. Here's a working example of a configuration dictionary:
- {"version": 1,
- "formatters": {"brief": {"format": "%(levelname)-8s: %(name)-15s: %(message)s"},
- "full": {"format": "%(asctime)s %(name)-15s %(levelname)-8s %(message)s"}
- },
- "handlers": {"console": {
- "class": "logging.StreamHandler",
- "formatter": "brief",
- "level": "INFO",
- "stream": "ext://sys.stdout"},
- "console_priority": {
- "class": "logging.StreamHandler",
- "formatter": "full",
- "level": "ERROR",
- "stream": "ext://sys.stderr"}
- },
- "root": {"level": "DEBUG", "handlers": ["console", "console_priority"]}}
If that dictionary is stored in a file called conf.json, it can be loaded and called with code like this:
- >>> import json, logging.config
- >>> with open('conf.json') as f:
- ... conf = json.load(f)
- ...
- >>> logging.config.dictConfig(conf)
- >>> logging.info("Transaction completed normally")
- INFO : root : Transaction completed normally
- >>> logging.critical("Abnormal termination")
- 2011-02-17 11:14:36,694 root CRITICAL Abnormal termination
See also
- PEP 391 [https://peps.python.org/pep-0391/] - Dictionary Based Configuration for Logging
- PEP written by Vinay Sajip
PEP 3148: The concurrent.futures module
Code for creating and managing concurrency is being collected in a new top-level namespace, concurrent. Its first member is a futures package which provides a uniform high-level interface for managing threads and processes.
The design for concurrent.futures was inspired by the java.util.concurrent package. In that model, a running call and its result are represented by a Future object that abstracts features common to threads, processes, and remote procedure calls. That object supports status checks (running or done), timeouts, cancellations, adding callbacks, and access to results or exceptions.
The primary offering of the new module is a pair of executor classes for launching and managing calls. The goal of the executors is to make it easier to use existing tools for making parallel calls. They save the effort needed to setup a pool of resources, launch the calls, create a results queue, add timeout handling, and limit the total number of threads, processes, or remote procedure calls.
Ideally, each application should share a single executor across multiple components so that process and thread limits can be centrally managed. This solves the design challenge that arises when each component has its own competing strategy for resource management.
Both classes share a common interface with three methods: submit() for scheduling a callable and returning a Future object; map() for scheduling many asynchronous calls at a time, and shutdown() for freeing resources. The class is a context manager and can be used in a with statement to assure that resources are automatically released when currently pending futures are done executing.
A simple example of ThreadPoolExecutor is a launch of four parallel threads for copying files:
- import concurrent.futures, shutil
- with concurrent.futures.ThreadPoolExecutor(max_workers=4) as e:
- e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
- e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
- e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
- e.submit(shutil.copy, 'src4.txt', 'dest4.txt')
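The executors also hand back Future objects for inspecting results. The following is a brief added sketch (not from the original text) using result() and map():
- import concurrent.futures, math
- with concurrent.futures.ThreadPoolExecutor(max_workers=2) as e:
-     future = e.submit(math.factorial, 10)     # schedule one call and get a Future
-     print(future.result())                    # blocks until the call finishes -> 3628800
-     for root in e.map(math.sqrt, [1, 4, 9]):  # schedule many calls at once
-         print(root)                           # 1.0, 2.0, 3.0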
See also
- PEP 3148 [https://peps.python.org/pep-3148/] - futures - execute computations asynchronously
- PEP written by Brian Quinlan
Code for Threaded Parallel URL reads, an example using threads to fetch multiple web pages in parallel.
Code for computing prime numbers in parallel, an example demonstrating ProcessPoolExecutor.
PEP 3147: PYC Repository Directories
Python's scheme for caching bytecode in .pyc files did not work well in environments with multiple Python interpreters. If one interpreter encountered a cached file created by another interpreter, it would recompile the source and overwrite the cached file, thus losing the benefits of caching.
The issue of "pyc fights" has become more pronounced as it has become commonplace for Linux distributions to ship with multiple versions of Python. These conflicts also arise with CPython alternatives such as Unladen Swallow.
To solve this problem, Python's import machinery has been extended to use distinct filenames for each interpreter. Instead of Python 3.2 and Python 3.3 and Unladen Swallow each competing for a file called "mymodule.pyc", they will now look for "mymodule.cpython-32.pyc", "mymodule.cpython-33.pyc", and "mymodule.unladen10.pyc". And to prevent all of these new files from cluttering source directories, the pyc files are now collected in a "__pycache__" directory stored under the package directory.
Aside from the filenames and target directories, the new scheme has a few aspects that are visible to the programmer:
- Imported modules now have a __cached__ attribute which stores the name of the actual file that was imported:
- >>> import collections
- >>> collections.__cached__
- 'c:/py32/lib/__pycache__/collections.cpython-32.pyc'
- The tag that is unique to each interpreter is accessible from the imp module:
- >>> import imp
- >>> imp.get_tag()
- 'cpython-32'
- Scripts that try to deduce source filename from the imported file now need to be smarter. It is no longer sufficient to simply strip the "c" from a ".pyc" filename. Instead, use the new functions in the imp module:
- >>> imp.source_from_cache('c:/py32/lib/__pycache__/collections.cpython-32.pyc')
- 'c:/py32/lib/collections.py'
- >>> imp.cache_from_source('c:/py32/lib/collections.py')
- 'c:/py32/lib/__pycache__/collections.cpython-32.pyc'
- The py_compile and compileall modules have been updated to reflect the new naming convention and target directory. The command-line invocation of compileall has new options: -i for specifying a list of files and directories to compile and -b which causes bytecode files to be written to their legacy location rather than __pycache__ (a programmatic sketch follows this list).
- The importlib.abc module has been updated with new abstract base classes for loading bytecode files. The obsolete ABCs, PyLoader and PyPycLoader, have been deprecated (instructions on how to stay Python 3.1 compatible are included with the documentation).
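The same choice is available programmatically. Here is a brief added sketch (not from the original text; 'myproject' is a hypothetical package directory) using the legacy parameter of compileall.compile_dir(), which mirrors the -b command-line option:
- import compileall
- compileall.compile_dir('myproject', quiet=True)               # bytecode written under __pycache__/
- compileall.compile_dir('myproject', quiet=True, legacy=True)  # old-style .pyc files next to the source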
See also
- PEP 3147 [https://peps.python.org/pep-3147/] - PYC Repository Directories
- PEP written by Barry Warsaw
PEP 3149: ABI Version Tagged .so Files
The PYC repository directory allows multiple bytecode cache files to be co-located. This PEP implements a similar mechanism for shared object files by giving them a common directory and distinct names for each version.
The common directory is "pyshared" and the file names are made distinct by identifying the Python implementation (such as CPython, PyPy, Jython, etc.), the major and minor version numbers, and optional build flags (such as "d" for debug, "m" for pymalloc, "u" for wide-unicode). For an arbitrary package "foo", you may see these files when the distribution package is installed:
- /usr/share/pyshared/foo.cpython-32m.so
- /usr/share/pyshared/foo.cpython-33md.so
For Python itself, the tags are accessible from functions in the sysconfig module:
- >>> import sysconfig
- >>> sysconfig.get_config_var('SOABI') # find the version tag
- 'cpython-32mu'
- >>> sysconfig.get_config_var('EXT_SUFFIX') # find the full filename extension
- '.cpython-32mu.so'
See also
- PEP 3149 [https://peps.python.org/pep-3149/] - ABI Version Tagged .so Files
- PEP written by Barry Warsaw
PEP 3333: Python Web Server Gateway Interface v1.0.1
This informational PEP clarifies how bytes/text issues are to be handled by the WSGI protocol. The challenge is that string handling in Python 3 is most conveniently handled with the str type even though the HTTP protocol is itself bytes oriented.
The PEP differentiates so-called native strings that are used for request/response headers and metadata versus byte strings which are used for the bodies of requests and responses.
The native strings are always of type str but are restricted to code points between U+0000 through U+00FF which are translatable to bytes using Latin-1 encoding. These strings are used for the keys and values in the environment dictionary and for response headers and statuses in the start_response() function. They must follow RFC 2616 [https://datatracker.ietf.org/doc/html/rfc2616.html] with respect to encoding. That is, they must either be ISO-8859-1 characters or use RFC 2047 [https://datatracker.ietf.org/doc/html/rfc2047.html] MIME encoding.
For developers porting WSGI applications from Python 2, here are the salient points:
- If the app already used strings for headers in Python 2, no change is needed.
- If instead, the app encoded output headers or decoded input headers, then the headers will need to be re-encoded to Latin-1. For example, an output header that was encoded in utf-8 using h.encode('utf-8') now needs to be converted from bytes to native strings using h.encode('utf-8').decode('latin-1').
- Values yielded by an application or sent using the write() method must be byte strings. The start_response() function and environ must use native strings. The two cannot be mixed (a short sketch appears below).
For server implementers writing CGI-to-WSGI pathways or other CGI-style protocols, users must be able to access the environment using native strings even though the underlying platform may have a different convention. To bridge this gap, the wsgiref module has a new function, wsgiref.handlers.read_environ(), for transcoding CGI variables from os.environ into native strings and returning a new dictionary.
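Putting those rules together, here is a minimal added sketch of a conforming application (not taken from the PEP); the status and headers are native strings while the body is bytes:
- from wsgiref.simple_server import make_server
- def simple_app(environ, start_response):
-     status = '200 OK'                                           # native string
-     headers = [('Content-Type', 'text/plain; charset=utf-8')]   # native strings
-     start_response(status, headers)
-     return ['Hello, wörld!'.encode('utf-8')]                    # response body must be bytes
- if __name__ == '__main__':
-     make_server('localhost', 8000, simple_app).serve_forever()  # a hypothetical local port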
See also
- PEP 3333 [https://peps.python.org/pep-3333/] - Python Web Server Gateway Interface v1.0.1
- PEP written by Phillip Eby
Other Language Changes
Some smaller changes made to the core Python language are:
- String formatting for format() and str.format() gained new capabilities for the format character #. Previously, for integers in binary, octal, or hexadecimal, it caused the output to be prefixed with '0b', '0o', or '0x' respectively. Now it can also handle floats, complex, and Decimal, causing the output to always have a decimal point even when no digits follow it.
- >>> format(20, '#o')
- '0o24'
- >>> format(12.34, '#5.0f')
- ' 12.'
(Suggested by Mark Dickinson and implemented by Eric Smith in bpo-7094 [https://bugs.python.org/issue?@action=redirect&bpo=7094].)
- There is also a new str.format_map() method that extends the capabilities of the existing str.format() method by accepting arbitrary mapping objects. This new method makes it possible to use string formatting with any of Python's many dictionary-like objects such as defaultdict, Shelf, ConfigParser, or dbm. It is also useful with custom dict subclasses that normalize keys before look-up or that supply a __missing__() method for unknown keys:
- >>> import shelve
- >>> d = shelve.open('tmp.shl')
- >>> 'The {project_name} status is {status} as of {date}'.format_map(d)
- 'The testing project status is green as of February 15, 2011'
- >>> class LowerCasedDict(dict):
- ... def __getitem__(self, key):
- ... return dict.__getitem__(self, key.lower())
- ...
- >>> lcd = LowerCasedDict(part='widgets', quantity=10)
- >>> 'There are {QUANTITY} {Part} in stock'.format_map(lcd)
- 'There are 10 widgets in stock'
- >>> class PlaceholderDict(dict):
- ... def __missing__(self, key):
- ... return '<{}>'.format(key)
- ...
- >>> 'Hello {name}, welcome to {location}'.format_map(PlaceholderDict())
- 'Hello <name>, welcome to <location>'
(Suggested by Raymond Hettinger and contributed by Eric Smith in bpo-6081 [https://bugs.python.org/issue?@action=redirect&bpo=6081].)
- The interpreter can now be started with a quiet option, -q, to prevent the copyright and version information from being displayed in the interactive mode. The option can be introspected using the sys.flags attribute:
- $ python -q
- >>> sys.flags
- sys.flags(debug=0, division_warning=0, inspect=0, interactive=0,
- optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0,
- ignore_environment=0, verbose=0, bytes_warning=0, quiet=1)
(Contributed by Marcin Wojdyr in bpo-1772833 [https://bugs.python.org/issue?@action=redirect&bpo=1772833].)
- The hasattr() function works by calling getattr() and detecting whether an exception is raised. This technique allows it to detect methods created dynamically by __getattr__() or __getattribute__() which would otherwise be absent from the class dictionary. Formerly, hasattr would catch any exception, possibly masking genuine errors. Now, hasattr has been tightened to only catch AttributeError and let other exceptions pass through:
- >>> class A:
- ... @property
- ... def f(self):
- ... return 1 // 0
- ...
- >>> a = A()
- >>> hasattr(a, 'f')
- Traceback (most recent call last): ...
- ZeroDivisionError: integer division or modulo by zero
(Discovered by Yury Selivanov and fixed by Benjamin Peterson; bpo-9666 [https://bugs.python.org/issue?@action=redirect&bpo=9666].)
- The str() of a float or complex number is now the same as its repr(). Previously, the str() form was shorter but that just caused confusion and is no longer needed now that the shortest possible repr() is displayed by default:
- >>> import math
- >>> repr(math.pi)
- '3.141592653589793'
- >>> str(math.pi)
- '3.141592653589793'
(Proposed and implemented by Mark Dickinson; bpo-9337 [https://bugs.python.org/issue?@action=redirect&bpo=9337].)
- memoryview objects now have a release() method and they also now support the context management protocol. This allows timely release of any resources that were acquired when requesting a buffer from the original object.
- >>> with memoryview(b'abcdefgh') as v:
- ... print(v.tolist())
- [97, 98, 99, 100, 101, 102, 103, 104]
(Added by Antoine Pitrou; bpo-9757 [https://bugs.python.org/issue?@action=redirect&bpo=9757].)
- Previously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block:
- def outer(x):
- def inner():
- return x
- inner()
- del x
This is now allowed. Remember that the target of an except clause is cleared, so this code which used to work with Python 2.6, raised a SyntaxError with Python 3.1 and now works again:
- def f():
- def print_error():
- print(e)
- try:
- something
- except Exception as e:
- print_error()
- # implicit "del e" here
(See bpo-4617 [https://bugs.python.org/issue?@action=redirect&bpo=4617].)
- Struct sequence types are now subclasses of tuple. This means that C structures like those returned by os.stat(), time.gmtime(), and sys.version_info now work like a named tuple and now work with functions and methods that expect a tuple as an argument. This is a big step forward in making the C structures as flexible as their pure Python counterparts:
- >>> import sys
- >>> isinstance(sys.version_info, tuple)
- True
- >>> 'Version %d.%d.%d %s(%d)' % sys.version_info
- 'Version 3.2.0 final(0)'
(Suggested by Arfrever Frehtes Taifersar Arahesis and implemented by Benjamin Peterson in bpo-8413 [https://bugs.python.org/issue?@action=redirect&bpo=8413].)
- Warnings are now easier to control using the PYTHONWARNINGS environment variable as an alternative to using -W at the command line:
- $ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'
(Suggested by Barry Warsaw and implemented by Philip Jenvey in bpo-7301 [https://bugs.python.org/issue?@action=redirect&bpo=7301].)
- A new warning category, ResourceWarning, has been added. It is emitted when potential issues with resource consumption or cleanup are detected. It is silenced by default in normal release builds but can be enabled through the means provided by the warnings module, or on the command line.
A ResourceWarning is issued at interpreter shutdown if the gc.garbage list isn't empty, and if gc.DEBUG_UNCOLLECTABLE is set, all uncollectable objects are printed. This is meant to make the programmer aware that their code contains object finalization issues.
A ResourceWarning is also issued when a file object is destroyed without having been explicitly closed. While the deallocator for such an object ensures it closes the underlying operating system resource (usually, a file descriptor), the delay in deallocating the object could produce various issues, especially under Windows. Here is an example of enabling the warning from the command line:
- $ python -q -Wdefault
- >>> f = open("foo", "wb")
- >>> del f
- __main__:1: ResourceWarning: unclosed file <_io.BufferedWriter name='foo'>
(Added by Antoine Pitrou and Georg Brandl in bpo-10093 [https://bugs.python.org/issue?@action=redirect&bpo=10093] and bpo-477863 [https://bugs.python.org/issue?@action=redirect&bpo=477863].)
- range objects now support index and count methods. This is part of an effort to make more objects fully implement the collections.Sequence abstract base class. As a result, the language will have a more uniform API. In addition, range objects now support slicing and negative indices, even with values larger than sys.maxsize. This makes range more interoperable with lists:
- >>> range(0, 100, 2).count(10)
- 1
- >>> range(0, 100, 2).index(10)
- 5
- >>> range(0, 100, 2)[5]
- 10
- >>> range(0, 100, 2)[0:5]
- range(0, 10, 2)
(Contributed by Daniel Stutzbach in bpo-9213 [https://bugs.python.org/issue?@action=redirect&bpo=9213], by Alexander Belopolsky in bpo-2690 [https://bugs.python.org/issue?@action=redirect&bpo=2690], and by Nick Coghlan in bpo-10889 [https://bugs.python.org/issue?@action=redirect&bpo=10889].)
- The callable() builtin function from Py2.x was resurrected. It provides a concise, readable alternative to using an abstract base class in an expression like isinstance(x, collections.Callable):
- >>> callable(max)
- True
- >>> callable(20)
- False
(See bpo-10518 [https://bugs.python.org/issue?@action=redirect&bpo=10518].)
- Python's import mechanism can now load modules installed in directories with non-ASCII characters in the path name. This solved an aggravating problem with home directories for users with non-ASCII characters in their usernames.
(Required extensive work by Victor Stinner in bpo-9425 [https://bugs.python.org/issue?@action=redirect&bpo=9425].)
New, Improved, and Deprecated Modules
The Python standard library received substantial maintenance work and quality improvements.
The biggest news for Python 3.2 is that the email package, mailbox module, and nntplib modules now work correctly with the bytes/text model in Python 3. For the first time, there is correct handling of messages with mixed encodings.
Throughout the standard library, there has been more careful attention to encodings and text versus bytes issues. In particular, interactions with the operating system are now better able to exchange non-ASCII data using the Windows MBCS encoding, locale-aware encodings, or UTF-8.
Another significant win is the addition of substantially better support for SSL connections and security certificates.
In addition, more classes now implement a context manager to support convenient and reliable resource cleanup using a with statement.
email
The usability of the email package in Python 3 has been mostly fixed by the extensive efforts of R. David Murray. The problem was that emails are typically read and stored in the form of bytes rather than str text, and they may contain multiple encodings within a single email. So, the email package had to be extended to parse and generate email messages in bytes format.
- New functions message_from_bytes() and message_from_binary_file(), and new classes BytesFeedParser and BytesParser allow binary message data to be parsed into model objects (a short sketch follows this list).
- Given bytes input to the model, get_payload() will by default decode a message body that has a Content-Transfer-Encoding of 8bit using the charset specified in the MIME headers and return the resulting string.
- Given bytes input to the model, Generator will convert message bodies that have a Content-Transfer-Encoding of 8bit to instead have a 7bit Content-Transfer-Encoding. Headers with unencoded non-ASCII bytes are deemed to be RFC 2047 [https://datatracker.ietf.org/doc/html/rfc2047.html]-encoded using the unknown-8bit character set.
- A new class BytesGenerator produces bytes as output, preserving any unchanged non-ASCII data that was present in the input used to build the model, including message bodies with a Content-Transfer-Encoding of 8bit.
- The smtplib SMTP class now accepts a byte string for the msg argument to the sendmail() method, and a new method, send_message(), accepts a Message object and can optionally obtain the from_addr and to_addrs addresses directly from the object.
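As a brief added sketch (not from the original text) of the binary interfaces listed above, a message can be parsed from bytes and regenerated as bytes:
- >>> import io
- >>> from email import message_from_bytes
- >>> from email.generator import BytesGenerator
- >>> raw = b'From: alice@example.com\r\nSubject: hi\r\n\r\nhello\r\n'
- >>> msg = message_from_bytes(raw)    # parse binary data into a Message object
- >>> msg['Subject']
- 'hi'
- >>> out = io.BytesIO()
- >>> BytesGenerator(out).flatten(msg) # regenerate the message as bytes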
(Proposed and implemented by R. David Murray, bpo-4661 [https://bugs.python.org/issue?@action=redirect&bpo=4661] and bpo-10321 [https://bugs.python.org/issue?@action=redirect&bpo=10321].)
elementtree
The xml.etree.ElementTree package and its xml.etree.cElementTree counterpart have been updated to version 1.3.
Several new and useful functions and methods have been added:
- xml.etree.ElementTree.fromstringlist() which builds an XML document from a sequence of fragments
- xml.etree.ElementTree.register_namespace() for registering a global namespace prefix
- xml.etree.ElementTree.tostringlist() for string representation including all sublists
- xml.etree.ElementTree.Element.extend() for appending a sequence of zero or more elements (see the sketch after this list)
- xml.etree.ElementTree.Element.iterfind() searches an element and subelements
- xml.etree.ElementTree.Element.itertext() creates a text iterator over an element and its subelements
- xml.etree.ElementTree.TreeBuilder.doctype() handles doctype declarations
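Here is a small added sketch (the XML content is made up) exercising two of the new methods, Element.extend() and Element.iterfind():
- >>> from xml.etree.ElementTree import Element, fromstring
- >>> root = fromstring('<catalog><item name="bolt"/></catalog>')
- >>> root.extend([Element('item', name='nut'), Element('item', name='washer')])
- >>> [e.get('name') for e in root.iterfind('item')]
- ['bolt', 'nut', 'washer']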
Two methods have been deprecated:
- xml.etree.ElementTree.getchildren() use list(elem) instead.
- xml.etree.ElementTree.getiterator() use Element.iter instead.
For details of the update, see Introducing ElementTree [https://web.archive.org/web/20200703234532/http://effbot.org/zone/elementtree-13-intro.htm] on Fredrik Lundh's website.
(Contributed by Florent Xicluna and Fredrik Lundh in bpo-6472 [https://bugs.python.org/issue?@action=redirect&bpo=6472].)
functools
- The functools module includes a new decorator for caching function calls. functools.lru_cache() can save repeated queries to an external resource whenever the results are expected to be the same.
For example, adding a caching decorator to a database query function can save database accesses for popular searches:
- >>> import functools
- >>> @functools.lru_cache(maxsize=300)
- ... def get_phone_number(name):
- ... c = conn.cursor()
- ... c.execute('SELECT phonenumber FROM phonelist WHERE name=?', (name,))
- ... return c.fetchone()[0]
- >>> for name in user_requests:
- ... get_phone_number(name) # cached lookup
To help with choosing an effective cache size, the wrapped function is instrumented for tracking cache statistics:
- >>> get_phone_number.cache_info()
- CacheInfo(hits=4805, misses=980, maxsize=300, currsize=300)
If the phonelist table gets updated, the outdated contents of the cache can be cleared with:
- >>> get_phone_number.cache_clear()
(Contributed by Raymond Hettinger and incorporating design ideas from Jim Baker, Miki Tebeka, and Nick Coghlan; see recipe 498245 [https://code.activestate.com/recipes/498245-lru-and-lfu-cache-decorators/], recipe 577479 [https://code.activestate.com/recipes/577479-simple-caching-decorator/], bpo-10586 [https://bugs.python.org/issue?@action=redirect&bpo=10586], and bpo-10593 [https://bugs.python.org/issue?@action=redirect&bpo=10593].)
- The functools.wraps() decorator now adds a __wrapped__ attribute pointing to the original callable function. This allows wrapped functions to be introspected. It also copies __annotations__ if defined. And now it also gracefully skips over missing attributes such as __doc__ which might not be defined for the wrapped callable.
In the above example, the cache can be removed by recovering the original function:
- >>> get_phone_number = get_phone_number.__wrapped__ # uncached function
(Contributed by Nick Coghlan and Terrence Cole in bpo-9567 [https://bugs.python.org/issue?@action=redirect&bpo=9567], bpo-3445 [https://bugs.python.org/issue?@action=redirect&bpo=3445], and bpo-8814 [https://bugs.python.org/issue?@action=redirect&bpo=8814].)
- To aid in writing classes with rich comparison methods, a new decorator functools.total_ordering() will use existing equality and inequality methods to fill in the remaining methods.
For example, supplying __eq__ and __lt__ will enable total_ordering() to fill in __le__, __gt__ and __ge__:
- from functools import total_ordering
- @total_ordering
- class Student:
-     def __eq__(self, other):
-         return ((self.lastname.lower(), self.firstname.lower()) ==
-                 (other.lastname.lower(), other.firstname.lower()))
-     def __lt__(self, other):
-         return ((self.lastname.lower(), self.firstname.lower()) <
-                 (other.lastname.lower(), other.firstname.lower()))
With the total_ordering decorator, the remaining comparison methods are filled in automatically.
(Contributed by Raymond Hettinger.)
- To aid in porting programs from Python 2, the functools.cmp_to_key() function converts an old-style comparison function to a modern key function:
- >>> # locale-aware sort order
- >>> sorted(iterable, key=cmp_to_key(locale.strcoll))
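As a self-contained added sketch (using a made-up comparison function rather than locale.strcoll):
- >>> from functools import cmp_to_key
- >>> def compare_lengths(a, b):   # old-style cmp: negative, zero, or positive
- ...     return len(a) - len(b)
- ...
- >>> sorted(['pear', 'fig', 'banana'], key=cmp_to_key(compare_lengths))
- ['fig', 'pear', 'banana']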
For sorting examples and a brief sorting tutorial, see the Sorting HowTo [https://wiki.python.org/moin/HowTo/Sorting/] tutorial.
(Contributed by Raymond Hettinger.)
itertools
- The itertools module has a new accumulate() function modeled on APL's scan operator and Numpy's accumulate function:
- >>> from itertools import accumulate
- >>> list(accumulate([8, 2, 50]))
- [8, 10, 60]
- >>> prob_dist = [0.1, 0.4, 0.2, 0.3]
- >>> list(accumulate(prob_dist)) # cumulative probability distribution
- [0.1, 0.5, 0.7, 1.0]
For an example using accumulate(), see the examples for the random module.
(Contributed by Raymond Hettinger and incorporating design suggestions from Mark Dickinson.)
collections
- The collections.Counter class now has two forms of in-place subtraction, the existing -= operator for saturating subtraction [https://en.wikipedia.org/wiki/Saturation_arithmetic] and the new subtract() method for regular subtraction. The former is suitable for multisets [https://en.wikipedia.org/wiki/Multiset] which only have positive counts, and the latter is more suitable for use cases that allow negative counts:
- >>> from collections import Counter
- >>> tally = Counter(dogs=5, cats=3)
- >>> tally -= Counter(dogs=2, cats=8) # saturating subtraction
- >>> tally
- Counter({'dogs': 3})
- >>> tally = Counter(dogs=5, cats=3)
- >>> tally.subtract(dogs=2, cats=8) # regular subtraction
- >>> tally
- Counter({'dogs': 3, 'cats': -5})
(Contributed by Raymond Hettinger.)
- The collections.OrderedDict class has a new method move_to_end() which takes an existing key and moves it to either the first or last position in the ordered sequence.
The default is to move an item to the last position. This is equivalent to renewing an entry with od[k] = od.pop(k).
A fast move-to-end operation is useful for resequencing entries. For example, an ordered dictionary can be used to track order of access by aging entries from the oldest to the most recently accessed.
- >>> from collections import OrderedDict
- >>> d = OrderedDict.fromkeys(['a', 'b', 'X', 'd', 'e'])
- >>> list(d)
- ['a', 'b', 'X', 'd', 'e']
- >>> d.move_to_end('X')
- >>> list(d)
- ['a', 'b', 'd', 'e', 'X']
(Contributed by Raymond Hettinger.)
- The collections.deque class grew two new methods count() and reverse() that make them more substitutable for list objects:
- >>> from collections import deque
- >>> d = deque('simsalabim')
- >>> d.count('s')
- 2
- >>> d.reverse()
- >>> d
- deque(['m', 'i', 'b', 'a', 'l', 'a', 's', 'm', 'i', 's'])
(Contributed by Raymond Hettinger.)
threading
The threading module has a new Barrier synchronization class for making multiple threads wait until all of them have reached a common barrier point. Barriers are useful for making sure that a task with multiple preconditions does not run until all of the predecessor tasks are complete.
Barriers can work with an arbitrary number of threads. This is a generalization of a Rendezvous [https://en.wikipedia.org/wiki/Synchronous_rendezvous] which is defined for only two threads.
Implemented as a two-phase cyclic barrier, Barrier objects are suitable for use in loops. The separate filling and draining phases assure that all threads get released (drained) before any one of them can loop back and reenter the barrier. The barrier fully resets after each cycle.
Example of using barriers:
- from threading import Barrier, Thread
- def get_votes(site):
- ballots = conduct_election(site)
- all_polls_closed.wait() # do not count until all polls are closed
- totals = summarize(ballots)
- publish(site, totals)
- all_polls_closed = Barrier(len(sites))
- for site in sites:
- Thread(target=get_votes, args=(site,)).start()
In this example, the barrier enforces a rule that votes cannot be counted at any polling site until all polls are closed. Notice how a solution with a barrier is similar to one with threading.Thread.join(), but the threads stay alive and continue to do work (summarizing ballots) after the barrier point is crossed.
If any of the predecessor tasks can hang or be delayed, a barrier can be created with an optional timeout parameter. Then if the timeout period elapses before all the predecessor tasks reach the barrier point, all waiting threads are released and a BrokenBarrierError exception is raised:
- def get_votes(site):
- ballots = conduct_election(site)
- try:
- all_polls_closed.wait(timeout=midnight - time.now())
- except BrokenBarrierError:
- lockbox = seal_ballots(ballots)
- queue.put(lockbox)
- else:
- totals = summarize(ballots)
- publish(site, totals)
In this example, the barrier enforces a more robust rule. If some election sites do not finish before midnight, the barrier times-out and the ballots are sealed and deposited in a queue for later handling.
See Barrier Synchronization Patterns [https://osl.cs.illinois.edu/media/papers/karmani-2009-barrier_synchronization_pattern.pdf] for more examples of how barriers can be used in parallel computing. Also, there is a simple but thorough explanation of barriers in The Little Book of Semaphores [https://greenteapress.com/semaphores/LittleBookOfSemaphores.pdf], section 3.6.
(Contributed by Kristján Valur Jónsson with an API review by Jeffrey Yasskin in bpo-8777 [https://bugs.python.org/issue?@action=redirect&bpo=8777].)
datetime and time
- The datetime module has a new type timezone that implements the tzinfo interface by returning a fixed UTC offset and timezone name. This makes it easier to create timezone-aware datetime objects:
- >>> from datetime import datetime, timezone
- >>> datetime.now(timezone.utc)
- datetime.datetime(2010, 12, 8, 21, 4, 2, 923754, tzinfo=datetime.timezone.utc)
- >>> datetime.strptime("01/01/2000 12:00 +0000", "%m/%d/%Y %H:%M %z")
- datetime.datetime(2000, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)
- Also, timedelta objects can now be multiplied by float and divided by float and int objects. And timedelta objects can now divide one another (a short sketch appears at the end of this section).
- The datetime.date.strftime() method is no longer restricted to years after 1900. The new supported year range is from 1000 to 9999 inclusive.
- Whenever a two-digit year is used in a time tuple, the interpretation has been governed by time.accept2dyear. The default is True which means that for a two-digit year, the century is guessed according to the POSIX rules governing the %y strptime format.
Starting with Py3.2, use of the century guessing heuristic will emit a DeprecationWarning. Instead, it is recommended that time.accept2dyear be set to False so that large date ranges can be used without guesswork:
- >>> import time, warnings
- >>> warnings.resetwarnings() # remove the default warning filters
- >>> time.accept2dyear = True # guess whether 11 means 11 or 2011
- >>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0))
- Warning (from warnings module):
- ...
- DeprecationWarning: Century info guessed for a 2-digit year.
- 'Fri Jan 1 12:34:56 2011'
- >>> time.accept2dyear = False # use the full range of allowable dates
- >>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0))
- 'Fri Jan 1 12:34:56 11'
Several functions now have significantly expanded date ranges. When time.accept2dyear is false, the time.asctime() function will accept any year that fits in a C int, while the time.mktime() and time.strftime() functions will accept the full range supported by the corresponding operating system functions.
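The new timedelta arithmetic can be seen in this brief added sketch (not from the original text; the repr shown is the pre-3.7 form used by Python 3.2):
- >>> from datetime import timedelta
- >>> week = timedelta(days=7)
- >>> week * 1.5                   # multiplication by a float
- datetime.timedelta(10, 43200)
- >>> week / 2                     # division by an int
- datetime.timedelta(3, 43200)
- >>> week / timedelta(hours=12)   # dividing one timedelta by another
- 14.0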
(Contributed by Alexander Belopolsky and Victor Stinner in bpo-1289118 [https://bugs.python.org/issue?@action=redirect&bpo=1289118], bpo-5094 [https://bugs.python.org/issue?@action=redirect&bpo=5094], bpo-6641 [https://bugs.python.org/issue?@action=redirect&bpo=6641], bpo-2706 [https://bugs.python.org/issue?@action=redirect&bpo=2706], bpo-1777412 [https://bugs.python.org/issue?@action=redirect&bpo=1777412], bpo-8013 [https://bugs.python.org/issue?@action=redirect&bpo=8013], and bpo-10827 [https://bugs.python.org/issue?@action=redirect&bpo=10827].)
math
The math module has added six new functions based on the C99 standard.
The isfinite() function provides a reliable and fast way to detect special values. It returns True for regular numbers and False for Nan or Infinity:
- >>> from math import isfinite
- >>> [isfinite(x) for x in (123, 4.56, float('Nan'), float('Inf'))]
- [True, True, False, False]
The expm1() function computes e**x-1 for small values of x without incurring the loss of precision that usually accompanies the subtraction of nearly equal quantities:
- >>> from math import expm1
- >>> expm1(0.013671875) # more accurate way to compute e**x-1 for a small x
- 0.013765762467652909
The erf() function computes a probability integral or Gaussian error function [https://en.wikipedia.org/wiki/Error_function]. The complementary error function, erfc(), is 1 - erf(x):
- >>> from math import erf, erfc, sqrt
- >>> erf(1.0/sqrt(2.0)) # portion of normal distribution within 1 standard deviation
- 0.682689492137086
- >>> erfc(1.0/sqrt(2.0)) # portion of normal distribution outside 1 standard deviation
- 0.31731050786291404
- >>> erf(1.0/sqrt(2.0)) + erfc(1.0/sqrt(2.0))
- 1.0
The gamma() function is a continuous extension of the factorial function. See https://en.wikipedia.org/wiki/Gamma_function for details. Because the function is related to factorials, it grows large even for small values of x, so there is also a lgamma() function for computing the natural logarithm of the gamma function:
- >>> from math import gamma, lgamma
- >>> gamma(7.0) # six factorial
- 720.0
- >>> lgamma(801.0) # log(800 factorial)
- 4551.950730698041
(Contributed by Mark Dickinson.)
abc
The abc module now supports abstractclassmethod() and abstractstaticmethod().
These tools make it possible to define an abstract base class that requires a particular classmethod() or staticmethod() to be implemented:
- import abc
- class Temperature(metaclass=abc.ABCMeta):
- @abc.abstractclassmethod
- def from_fahrenheit(cls, t):
- ...
- @abc.abstractclassmethod
- def from_celsius(cls, t):
- ...
(Patch submitted by Daniel Urban; bpo-5867 [https://bugs.python.org/issue?@action=redirect&bpo=5867].)
io
The io.BytesIO has a new method, getbuffer(), which provides functionality similar to memoryview(). It creates an editable view of the data without making a copy. The buffer's random access and support for slice notation are well-suited to in-place editing:
- >>> REC_LEN, LOC_START, LOC_LEN = 34, 7, 11
- >>> def change_location(buffer, record_number, location):
- ... start = record_number * REC_LEN + LOC_START
- ... buffer[start: start+LOC_LEN] = location
- >>> import io
- >>> byte_stream = io.BytesIO(
- ... b'G3805 storeroom Main chassis '
- ... b'X7899 shipping Reserve cog '
- ... b'L6988 receiving Primary sprocket'
- ... )
- >>> buffer = byte_stream.getbuffer()
- >>> change_location(buffer, 1, b'warehouse ')
- >>> change_location(buffer, 0, b'showroom ')
- >>> print(byte_stream.getvalue())
- b'G3805 showroom Main chassis '
- b'X7899 warehouse Reserve cog '
- b'L6988 receiving Primary sprocket'
(Contributed by Antoine Pitrou in bpo-5506 [https://bugs.python.org/issue?@action=redirect&bpo=5506].)
reprlib
When writing a __repr__() method for a custom container, it is easy to forget to handle the case where a member refers back to the container itself. Python's builtin objects such as list and set handle self-reference by displaying "..." in the recursive part of the representation string.
To help write such __repr__() methods, the reprlib module has a new decorator, recursive_repr(), for detecting recursive calls to __repr__() and substituting a placeholder string instead:
- >>> from reprlib import recursive_repr
- >>> class MyList(list):
- ... @recursive_repr()
- ... def __repr__(self):
- ... return '<' + '|'.join(map(repr, self)) + '>'
- ...
- >>> m = MyList('abc')
- >>> m.append(m)
- >>> m.append('x')
- >>> print(m)
- <'a'|'b'|'c'|...|'x'>
(Contributed by Raymond Hettinger in bpo-9826 [https://bugs.python.org/issue?@action=redirect&bpo=9826] and bpo-9840 [https://bugs.python.org/issue?@action=redirect&bpo=9840].)
logging
In addition to the dictionary-based configuration described above, the logging package has many other improvements.
The logging documentation has been augmented by a basic tutorial, an advanced tutorial, and a cookbook of logging recipes. These documents are the fastest way to learn about logging.
The logging.basicConfig() set-up function gained a style argument to support three different types of string formatting. It defaults to "%" for traditional %-formatting, can be set to "{" for the new str.format() style, or can be set to "$" for the shell-style formatting provided by string.Template. The following three configurations are equivalent:
- >>> from logging import basicConfig
- >>> basicConfig(style='%', format="%(name)s -> %(levelname)s: %(message)s")
- >>> basicConfig(style='{', format="{name} -> {levelname}: {message}")
- >>> basicConfig(style='$', format="$name -> $levelname: $message")
If no configuration is set up before a logging event occurs, there is now a default configuration using a StreamHandler directed to sys.stderr for events of WARNING level or higher. Formerly, an event occurring before a configuration was set up would either raise an exception or silently drop the event depending on the value of logging.raiseExceptions. The new default handler is stored in logging.lastResort.
The use of filters has been simplified. Instead of creating a Filter object, the predicate can be any Python callable that returns True or False.
There were a number of other improvements that add flexibility and simplify configuration. See the module documentation for a full listing of changes in Python 3.2.
csv
The csv module now supports a new dialect, unix_dialect, which applies quoting for all fields and a traditional Unix style with '\n' as the line terminator. The registered dialect name is unix.
The csv.DictWriter has a new method, writeheader(), for writing-out an initial row to document the field names:
- >>> import csv, sys
- >>> w = csv.DictWriter(sys.stdout, ['name', 'dept'], dialect='unix')
- >>> w.writeheader()
- "name","dept"
- >>> w.writerows([
- ... {'name': 'tom', 'dept': 'accounting'},
- ... {'name': 'susan', 'dept': 'sales'}])
- "tom","accounting"
- "susan","sales"
(New dialect suggested by Jay Talbot in bpo-5975 [https://bugs.python.org/issue?@action=redirect&bpo=5975], and the new method suggested by Ed Abraham in bpo-1537721 [https://bugs.python.org/issue?@action=redirect&bpo=1537721].)
contextlib
There is a new and slightly mind-blowing tool ContextDecorator that is helpful for creating a context manager that does double duty as a function decorator.
As a convenience, this new functionality is used by contextmanager() so that no extra effort is needed to support both roles.
The basic idea is that both context managers and function decorators can be used for pre-action and post-action wrappers. Context managers wrap a group of statements using a with statement, and function decorators wrap a group of statements enclosed in a function. So, occasionally there is a need to write a pre-action or post-action wrapper that can be used in either role.
For example, it is sometimes useful to wrap functions or groups of statements with a logger that can track the time of entry and time of exit. Rather than writing both a function decorator and a context manager for the task, the contextmanager() provides both capabilities in a single definition:
- from contextlib import contextmanager
- import logging
- logging.basicConfig(level=logging.INFO)
- @contextmanager
- def track_entry_and_exit(name):
- logging.info('Entering: %s', name)
- yield
- logging.info('Exiting: %s', name)
Formerly, this would have only been usable as a context manager:
- with track_entry_and_exit('widget loader'):
- print('Some time consuming activity goes here')
- load_widget()
Now, it can be used as a decorator as well:
- @track_entry_and_exit('widget loader')
- def activity():
- print('Some time consuming activity goes here')
- load_widget()
Trying to fulfill two roles at once places some limitations on the technique. Context managers normally have the flexibility to return an argument usable by a with
statement, but there is no parallel for function decorators.
In the above example, there is not a clean way for the track_entry_and_exit context manager to return a logging instance for use in the body of enclosed statements.
(Contributed by Michael Foord in bpo-9110 [https://bugs.python.org/issue?@action=redirect&bpo=9110].)
decimal and fractions
Mark Dickinson crafted an elegant and efficient scheme for assuring that different numeric datatypes will have the same hash value whenever their actual values are equal (bpo-8188 [https://bugs.python.org/issue?@action=redirect&bpo=8188]):
- assert hash(Fraction(3, 2)) == hash(1.5) == \
- hash(Decimal("1.5")) == hash(complex(1.5, 0))
Some of the hashing details are exposed through a new attribute, sys.hash_info, which describes the bit width of the hash value, the prime modulus, the hash values for infinity and nan, and the multiplier used for the imaginary part of a number:
- >>> sys.hash_info
- sys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003)
An early decision to limit the interoperability of various numeric types has been relaxed. It is still unsupported (and ill-advised) to have implicit mixing in arithmetic expressions such as Decimal('1.1') + float('1.1') because the latter loses information in the process of constructing the binary float. However, since an existing floating-point value can be converted losslessly to either a decimal or rational representation, it makes sense to add them to the constructor and to support mixed-type comparisons.
- The decimal.Decimal constructor now accepts float objects directly so there is no longer a need to use the from_float() method (bpo-8257 [https://bugs.python.org/issue?@action=redirect&bpo=8257]).
- Mixed type comparisons are now fully supported so that Decimal objects can be directly compared with float and fractions.Fraction (bpo-2531 [https://bugs.python.org/issue?@action=redirect&bpo=2531] and bpo-8188 [https://bugs.python.org/issue?@action=redirect&bpo=8188]). A quick sketch of these changes follows this list.
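Here is a brief added sketch (not from the original text) of the float-accepting constructor and a mixed-type comparison:
- >>> from decimal import Decimal
- >>> from fractions import Fraction
- >>> Decimal(0.25)                     # a float accepted directly by the constructor
- Decimal('0.25')
- >>> Decimal('1.5') == Fraction(3, 2)  # mixed-type comparison
- True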
Similar changes were made to fractions.Fraction so that the from_float() and from_decimal() methods are no longer needed (bpo-8294 [https://bugs.python.org/issue?@action=redirect&bpo=8294]):
- >>> from decimal import Decimal
- >>> from fractions import Fraction
- >>> Decimal(1.1)
- Decimal('1.100000000000000088817841970012523233890533447265625')
- >>> Fraction(1.1)
- Fraction(2476979795053773, 2251799813685248)
Another useful change for the decimal module is that the Context.clamp attribute is now public. This is useful in creating contexts that correspond to the decimal interchange formats specified in IEEE 754 (see bpo-8540 [https://bugs.python.org/issue?@action=redirect&bpo=8540]).
(Contributed by Mark Dickinson and Raymond Hettinger.)
ftp
The ftplib.FTP class now supports the context management protocol to unconditionally consume socket.error exceptions and to close the FTP connection when done:
- >>> from ftplib import FTP
- >>> with FTP("ftp1.at.proftpd.org") as ftp:
- ...     ftp.login()
- ...     ftp.dir()
- ...
- '230 Anonymous login ok, restrictions apply.'
- dr-xr-xr-x 9 ftp ftp 154 May 6 10:43 .
- dr-xr-xr-x 9 ftp ftp 154 May 6 10:43 ..
- dr-xr-xr-x 5 ftp ftp 4096 May 6 10:43 CentOS
- dr-xr-xr-x 3 ftp ftp 18 Jul 10 2008 Fedora
Other file-like objects, such as mmap.mmap and fileinput.input(), also grew auto-closing context managers:
- with fileinput.input(files=('log1.txt', 'log2.txt')) as f:
-     for line in f:
-         process(line)
(Contributed by Tarek Ziadé and Giampaolo Rodolà in bpo-4972 [https://bugs.python.org/issue?@action=redirect&bpo=4972], and by Georg Brandl in bpo-8046 [https://bugs.python.org/issue?@action=redirect&bpo=8046] and bpo-1286 [https://bugs.python.org/issue?@action=redirect&bpo=1286].)
The FTP_TLS class now accepts a context parameter, which is an ssl.SSLContext object allowing bundling SSL configuration options, certificates and private keys into a single (potentially long-lived) structure.
(Contributed by Giampaolo Rodolà in bpo-8806 [https://bugs.python.org/issue?@action=redirect&bpo=8806].)
popen
The os.popen() and subprocess.Popen() functions now support with statements for auto-closing of the file descriptors.
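A tiny added sketch (not from the original text; the echo command assumes a POSIX environment) of the new context manager support:
- >>> import subprocess
- >>> with subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE) as proc:
- ...     proc.stdout.read()   # the pipe is closed automatically on exit
- ...
- b'hello\n'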
(Contributed by Antoine Pitrou and Brian Curtin in bpo-7461 [https://bugs.python.org/issue?@action=redirect&bpo=7461] and bpo-10554 [https://bugs.python.org/issue?@action=redirect&bpo=10554].)
select
The select module now exposes a new, constant attribute, PIPE_BUF, which gives the minimum number of bytes which are guaranteed not to block when select.select() says a pipe is ready for writing.
- >>> import select
- >>> select.PIPE_BUF
- 512
(Available on Unix systems. Patch by Sébastien Sablé in bpo-9862 [https://bugs.python.org/issue?@action=redirect&bpo=9862].)
gzip and zipfile
gzip.GzipFile now implements the io.BufferedIOBase abstract base class (except for truncate()). It also has a peek() method and supports unseekable as well as zero-padded file objects.
The gzip module also gains the compress() and decompress() functions for easier in-memory compression and decompression. Keep in mind that text needs to be encoded as bytes before compressing and decompressing:
- >>> import gzip
- >>> s = 'Three shall be the number thou shalt count, '
- >>> s += 'and the number of the counting shall be three'
- >>> b = s.encode() # convert to utf-8
- >>> len(b)
- 89
- >>> c = gzip.compress(b)
- >>> len(c)
- 77
- >>> gzip.decompress(c).decode()[:42] # decompress and convert to text
- 'Three shall be the number thou shalt count'
(Contributed by Anand B. Pillai in bpo-3488 [https://bugs.python.org/issue?@action=redirect&bpo=3488]; and by Antoine Pitrou, Nir Aides and Brian Curtin in bpo-9962 [https://bugs.python.org/issue?@action=redirect&bpo=9962], bpo-1675951 [https://bugs.python.org/issue?@action=redirect&bpo=1675951], bpo-7471 [https://bugs.python.org/issue?@action=redirect&bpo=7471], and bpo-2846 [https://bugs.python.org/issue?@action=redirect&bpo=2846].)
Also, the zipfile.ZipExtFile class was reworked internally to represent files stored inside an archive. The new implementation is significantly faster and can be wrapped in an io.BufferedReader object for more speedups. It also solves an issue where interleaved calls to read and readline gave the wrong results.
(Patch submitted by Nir Aides in bpo-7610 [https://bugs.python.org/issue?@action=redirect&bpo=7610].)
tarfile
The TarFile class can now be used as a context manager. In addition, its add() method has a new option, filter, that controls which files are added to the archive and allows the file metadata to be edited.
The new filter option replaces the older, less flexible exclude parameter which is now deprecated. If specified, the optional filter parameter needs to be a keyword argument. The user-supplied filter function accepts a TarInfo object and returns an updated TarInfo object, or if it wants the file to be excluded, the function can return None:
- >>> import tarfile, glob
- >>> def myfilter(tarinfo):
- ... if tarinfo.isfile(): # only save real files
- ... tarinfo.uname = 'monty' # redact the user name
- ... return tarinfo
- >>> with tarfile.open(name='myarchive.tar.gz', mode='w:gz') as tf:
- ... for filename in glob.glob('*.txt'):
- ... tf.add(filename, filter=myfilter)
- ... tf.list()
- -rw-r--r-- monty/501 902 2011-01-26 17:59:11 annotations.txt
- -rw-r--r-- monty/501 123 2011-01-26 17:59:11 general_questions.txt
- -rw-r--r-- monty/501 3514 2011-01-26 17:59:11 prion.txt
- -rw-r--r-- monty/501 124 2011-01-26 17:59:11 py_todo.txt
- -rw-r--r-- monty/501 1399 2011-01-26 17:59:11 semaphore_notes.txt
(Proposed by Tarek Ziadé and implemented by Lars Gustäbel in bpo-6856 [https://bugs.python.org/issue?@action=redirect&bpo=6856].)
hashlib
The hashlib module has two new constant attributes listing the hashing algorithms guaranteed to be present in all implementations and those available on the current implementation:
- >>> import hashlib
- >>> hashlib.algorithms_guaranteed
- {'sha1', 'sha224', 'sha384', 'sha256', 'sha512', 'md5'}
- >>> hashlib.algorithms_available
- {'md2', 'SHA256', 'SHA512', 'dsaWithSHA', 'mdc2', 'SHA224', 'MD4', 'sha256',
- 'sha512', 'ripemd160', 'SHA1', 'MDC2', 'SHA', 'SHA384', 'MD2',
- 'ecdsa-with-SHA1','md4', 'md5', 'sha1', 'DSA-SHA', 'sha224',
- 'dsaEncryption', 'DSA', 'RIPEMD160', 'sha', 'MD5', 'sha384'}
(Suggested by Carl Chenet in bpo-7418 [https://bugs.python.org/issue?@action=redirect&bpo=7418].)
ast
The ast module has a wonderful general-purpose tool for safely evaluating expression strings using the Python literal syntax. The ast.literal_eval() function serves as a secure alternative to the builtin eval() function which is easily abused. Python 3.2 adds bytes and set literals to the list of supported types: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None.
- >>> from ast import literal_eval
- >>> request = "{'req': 3, 'func': 'pow', 'args': (2, 0.5)}"
- >>> literal_eval(request)
- {'args': (2, 0.5), 'req': 3, 'func': 'pow'}
- >>> request = "os.system('do something harmful')"
- >>> literal_eval(request)
- Traceback (most recent call last): ...
- ValueError: malformed node or string: <_ast.Call object at 0x101739a10>
(Implemented by Benjamin Peterson and Georg Brandl.)
os
Different operating systems use various encodings for filenames and environment variables. The os module provides two new functions, fsencode() and fsdecode(), for encoding and decoding filenames:
- >>> import os
- >>> filename = 'Sehenswürdigkeiten'
- >>> os.fsencode(filename)
- b'Sehensw\xc3\xbcrdigkeiten'
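A small added note (assuming a UTF-8 filesystem encoding): fsdecode() performs the inverse operation:
- >>> os.fsdecode(b'Sehensw\xc3\xbcrdigkeiten')
- 'Sehenswürdigkeiten'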
Some operating systems allow direct access to encoded bytes in the environment. If so, the os.supports_bytes_environ constant will be true.
For direct access to encoded environment variables (if available), use the new os.getenvb() function or use os.environb which is a bytes version of os.environ.
(Contributed by Victor Stinner.)
shutil
The shutil.copytree() function has two new options:
- ignore_dangling_symlinks: when symlinks=False so that the function copies a file pointed to by a symlink, not the symlink itself. This option will silence the error raised if the file doesn't exist.
- copy_function: a callable that will be used to copy files. shutil.copy2() is used by default.
(Contributed by Tarek Ziadé.)
In addition, the shutil module now supports archiving operations for zipfiles, uncompressed tarfiles, gzipped tarfiles, and bzipped tarfiles. And there are functions for registering additional archiving file formats (such as xz compressed tarfiles or custom formats).
The principal functions are make_archive() and unpack_archive(). By default, both operate on the current directory (which can be set by os.chdir()) and on any sub-directories. The archive filename needs to be specified with a full pathname. The archiving step is non-destructive (the original files are left unchanged).
- >>> import shutil, pprint
- >>> os.chdir('mydata') # change to the source directory
- >>> f = shutil.make_archive('/var/backup/mydata',
- ... 'zip') # archive the current directory
- >>> f # show the name of archive
- '/var/backup/mydata.zip'
- >>> os.chdir('tmp') # change to an unpacking directory
- >>> shutil.unpack_archive('/var/backup/mydata.zip') # recover the data
- >>> pprint.pprint(shutil.get_archive_formats()) # display known formats
- [('bztar', "bzip2'ed tarfile"),
- ('gztar', "gzip'ed tarfile"),
- ('tar', 'uncompressed tar file'),
- ('zip', 'ZIP file')]
- >>> shutil.register_archive_format( # register a new archive format
- ... name='xz',
- ... function=xz.compress, # callable archiving function
- ... extra_args=[('level', 8)], # arguments to the function
- ... description='xz compression'
- ... )
(Contributed by Tarek Ziadé.)