API

- S3FileSystem – Access S3 as if it were a file system.
- S3FileSystem.cat – Fetch (potentially multiple) paths' contents
- S3FileSystem.du – Space used by files and optionally directories within a path
- S3FileSystem.exists – Is there a file at the given path
- S3FileSystem.find – List all files below path.
- S3FileSystem.get – Copy file(s) to local.
- S3FileSystem.glob – Find files by glob-matching.
- S3FileSystem.info – Give details of entry at path
- S3FileSystem.ls – List objects at path.
- S3FileSystem.mkdir – Create directory entry at path
- S3FileSystem.mv – Move file(s) from one location to another
- S3FileSystem.open – Return a file-like object from the filesystem
- S3FileSystem.put – Copy file(s) from local.
- S3FileSystem.read_block – Read a block of bytes from a file
- S3FileSystem.rm – Delete files.
- S3FileSystem.tail – Get the last size bytes from file
- S3FileSystem.touch – Create empty file or truncate
- S3File – Open S3 key as a file.
- S3File.close – Close file
- S3File.flush – Write buffered data to backend store.
- S3File.info – File information about this path
- S3File.read – Return data from cache, or fetch pieces as necessary
- S3File.seek – Set current file location
- S3File.tell – Current file location
- S3File.write – Write data to buffer.
- S3Map – Mirror previous class, not implemented in fsspec
- class s3fs.core.S3FileSystem(*args, **kwargs)[source]
Access S3 as if it were a file system.
This exposes a filesystem-like API (ls, cp, open, etc.) on top of S3 storage.
Provide credentials either explicitly (key=, secret=) or depend on boto's credential methods. See botocore documentation for more information. If no credentials are available, use anon=True.
- Parameters:
anon (bool (False)) – Whether to use anonymous connection (public buckets only). If False, uses the key/secret given, or boto's credential resolver (client_kwargs, environment variables, config files, EC2 IAM server, in that order)
endpoint_url (string (None)) – Use this endpoint_url, if specified. Needed for connecting to non-AWS S3 buckets. Takes precedence over endpoint_url in client_kwargs.
key (string (None)) – If not anonymous, use this access key ID, if specified. Takes precedence over aws_access_key_id in client_kwargs.
secret (string (None)) – If not anonymous, use this secret access key, if specified. Takes precedence over aws_secret_access_key in client_kwargs.
token (string (None)) – If not anonymous, use this security token, if specified
use_ssl (bool (True)) – Whether to use SSL in connections to S3; may be faster without, but insecure. If use_ssl is also set in client_kwargs, the value set in client_kwargs will take priority.
s3_additional_kwargs (dict of parameters that are used when calling S3 API methods) – Typically used for things like "ServerSideEncryption".
client_kwargs (dict of parameters for the botocore client)
requester_pays (bool (False)) – If RequesterPays buckets are supported.
default_block_size (int (None)) – If given, the default block size value used for open(), if no specific value is given at all time. The built-in default is 50MB.
default_fill_cache (bool (True)) – Whether to use cache filling with open by default. Refer to S3File.open.
default_cache_type (string ("readahead")) – If given, the default cache_type value used for open(). Set to "none" if no caching is desired. See fsspec's documentation for other available cache_type values. Default cache_type is "readahead".
version_aware (bool (False)) – Whether to support bucket versioning. If enabled, this will require the user to have the necessary IAM permissions for dealing with versioned objects. Note that if you only need to work with the latest version of objects in a versioned bucket, and do not need the VersionId for those objects, you should set version_aware to False for performance reasons. When set to True, filesystem instances will use the S3 ListObjectVersions API call to list directory contents, which requires listing all historical object versions.
cache_regions (bool (False)) – Whether to cache bucket regions. Whenever a new bucket is used, the filesystem will first find out which region it belongs to and then use the client for that region.
asynchronous (bool (False)) – Whether this instance is to be used from inside coroutines.
config_kwargs (dict of parameters passed to botocore.client.Config)
kwargs (other parameters for core session)
session (aiobotocore AioSession object to be used for all connections.) – This session will be used in place of creating a new session inside S3FileSystem. For example: aiobotocore.session.AioSession(profile='test_user')
max_concurrency (int (10)) – The maximum number of concurrent transfers to use per file for multipart upload (put()) operations. Defaults to 10. When used in conjunction with S3FileSystem.put(batch_size=...), the maximum number of simultaneous connections is max_concurrency * batch_size. We may extend this parameter to affect pipe(), cat() and get(). Increasing this value will result in higher memory usage during multipart upload operations (by max_concurrency * chunksize bytes per file).
fixed_upload_size (bool (False)) – Use the same chunk size for all parts in a multipart upload (the last part can be smaller). Cloudflare R2 storage requires fixed_upload_size=True for multipart uploads.
fsspec (The following parameters are passed on to)
skip_instance_cache (to control reuse of instances)
use_listings_cache (to control reuse of directory listings)
listings_expiry_time (to control reuse of directory listings)
max_paths (to control reuse of directory listings)
Examples
>>> s3 = S3FileSystem(anon=False)
>>> s3.ls('my-bucket/')
['my-file.txt']
>>> with s3.open('my-bucket/my-file.txt', mode='rb') as f:
...     print(f.read())
b'Hello, world!'
- cat(path, recursive=False, on_error='raise', **kwargs)
Fetch (potentially multiple) paths’ contents
- Parameters:
recursive (bool) – If True, assume the path(s) are directories, and get all the contained files
on_error ("raise", "omit", "return") – If raise, an underlying exception will be raised (converted to KeyError if the type is in self.missing_exceptions); if omit, keys with exception will simply not be included in the output; if “return”, all keys are included in the output, but the value will be bytes or an exception instance.
kwargs (passed to cat_file)
- Returns:
dict of {path: contents} if there are multiple paths or the path has been otherwise expanded
- cat_file(path, start=None, end=None, **kwargs)
Get the content of a file
- Parameters:
path (URL of file on this filesystem)
start (int) – Bytes limits of the read. If negative, backwards from end, like usual python slices. Either can be None for start or end of file, respectively
end (int) – Bytes limits of the read. If negative, backwards from end, like usual python slices. Either can be None for start or end of file, respectively
kwargs (passed to open())
- cat_ranges(paths, starts, ends, max_gap=None, on_error='return', **kwargs)
Get the contents of byte ranges from one or more files
- Parameters:
paths (list) – A list of filepaths on this filesystem
starts (int or list) – Bytes limits of the read. If using a single int, the same value will be used to read all the specified files.
ends (int or list) – Bytes limits of the read. If using a single int, the same value will be used to read all the specified files.
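The broadcasting rule for starts and ends can be sketched with an in-memory stand-in. cat_ranges_local and the sample byte strings below are hypothetical illustrations of the documented behavior, not s3fs internals:

```python
def cat_ranges_local(blobs, starts, ends):
    # Hypothetical in-memory stand-in for cat_ranges(): a single int for
    # starts or ends is broadcast across all paths, per the docs above.
    n = len(blobs)
    starts = [starts] * n if isinstance(starts, int) else list(starts)
    ends = [ends] * n if isinstance(ends, int) else list(ends)
    return [b[s:e] for b, s, e in zip(blobs, starts, ends)]

files = [b"Alice, 100\n", b"Bob, 200\n"]        # made-up file contents
print(cat_ranges_local(files, 0, 5))            # same range for both files
print(cat_ranges_local(files, [0, 4], [5, 7]))  # per-file ranges
```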
- checksum(path, refresh=False)
Unique value for current version of file
If the checksum is the same from one moment to another, the contents are guaranteed to be the same. If the checksum changes, the contents might have changed.
- Parameters:
path (string/bytes) – path of file to get checksum for
refresh (bool (=False)) – if False, look in local cache for file details first
- chmod(path, acl, recursive=False, **kwargs)
Set Access Control on a bucket/key
See http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- Parameters:
path (string) – the object to set
acl (string) – the value of ACL to apply
recursive (bool) – whether to apply the ACL to all keys below the given path too
- classmethod clear_instance_cache()
Clear the cache of filesystem instances.
Notes
Unless overridden by setting the cachable class attribute to False, the filesystem class stores a reference to newly created instances. This prevents Python's normal rules around garbage collection from working, since the instance's refcount will not drop to zero until clear_instance_cache is called.
- clear_multipart_uploads(bucket)
Remove any partial uploads in the bucket
- connect(refresh=False, kwargs={})
Establish S3 connection object.
This method is called by any operation on an S3FileSystem instance. The refresh=True argument is useful if new credentials have been created and the instance needs to be reestablished. connect is a blocking version of set_session.
- Parameters:
refresh (bool (False)) – If True, create a new session even if one already exists.
kwargs (dict) – Currently unused.
- Return type:
Session to be closed later with await .close()
Examples
>>> s3 = S3FileSystem(profile="<profile name>")
>>> # use in an async coroutine to assign the client object to a local variable
>>> await s3.set_session()
>>> # blocking version of set_session
>>> s3.connect(refresh=True)
- copy(path1, path2, recursive=False, maxdepth=None, on_error=None, **kwargs)
Copy within two locations in the filesystem
- on_error ("raise", "ignore") – If raise, any not-found exceptions will be raised; if ignore, any not-found exceptions will cause the path to be skipped; defaults to raise unless recursive is true, where the default is ignore
- cp(path1, path2, **kwargs)
Alias of AbstractFileSystem.copy.
- created(path)
Return the created timestamp of a file as a datetime.datetime
- classmethod current()
Return the most recently instantiated FileSystem
If no instance has been created, then create one with defaults
- delete(path, recursive=False, maxdepth=None)
Alias of AbstractFileSystem.rm.
- disk_usage(path, total=True, maxdepth=None, **kwargs)
Alias of AbstractFileSystem.du.
- download(rpath, lpath, recursive=False, **kwargs)
Alias of AbstractFileSystem.get.
- du(path, total=True, maxdepth=None, withdirs=False, **kwargs)
Space used by files and optionally directories within a path
Directory size does not include the size of its contents.
- Parameters:
path (str)
total (bool) – Whether to sum all the file sizes
maxdepth (int or None) – Maximum number of directory levels to descend, None for unlimited.
withdirs (bool) – Whether to include directory paths in the output.
kwargs (passed to find)
- Returns:
Dict of {path: size} if total=False, or int otherwise, where numbers refer to bytes used.
- end_transaction()
Finish write transaction, non-context version
- exists(path)
Is there a file at the given path
- expand_path(path, recursive=False, maxdepth=None, **kwargs)
Turn one or more globs or directories into a list of all matching paths to files or directories.
kwargs are passed to glob or find, which may in turn call ls
- find(path, maxdepth=None, withdirs=None, detail=False, prefix='', **kwargs)
List all files below path. Like the POSIX find command without conditions
- Parameters:
path (str)
maxdepth (int or None) – If not None, the maximum number of levels to descend
withdirs (bool) – Whether to include directory paths in the output. This is True when used by glob, but users usually only want files.
prefix (str) – Only return files that match ^{path}/{prefix} (if there is an exact match filename == {path}/{prefix}, it will also be included)
- static from_dict(dct: dict[str, Any]) -> AbstractFileSystem
Recreate a filesystem instance from dictionary representation.
See .to_dict() for the expected structure of the input.
- Parameters:
dct (Dict[str, Any])
- Return type:
file system instance, not necessarily of this particular class.
Warning
This can import arbitrary modules (as determined by the cls key). Make sure you haven't installed any modules that may execute malicious code at import time.
- static from_json(blob: str) -> AbstractFileSystem
Recreate a filesystem instance from JSON representation.
See .to_json() for the expected structure of the input.
- Parameters:
blob (str)
- Return type:
file system instance, not necessarily of this particular class.
Warning
This can import arbitrary modules (as determined by the cls key). Make sure you haven't installed any modules that may execute malicious code at import time.
- property fsid
Persistent filesystem id that can be used to compare filesystems across sessions.
- get(rpath, lpath, recursive=False, callback=<fsspec.callbacks.NoOpCallback object>, maxdepth=None, **kwargs)
Copy file(s) to local.
Copies a specific file or tree of files (if recursive=True). If lpath ends with a “/”, it will be assumed to be a directory, and target files will go within. Can submit a list of paths, which may be glob-patterns and will be expanded.
Calls get_file for each source.
- get_delegated_s3pars(exp=3600)
Get temporary credentials from STS, appropriate for sending across a network. Only relevant where the key/secret were explicitly provided.
- Parameters:
exp (int) – Time in seconds that credentials are good for
- Return type:
dict of parameters
- get_file(rpath, lpath, callback=<fsspec.callbacks.NoOpCallback object>, outfile=None, **kwargs)
Copy single remote file to local
- get_mapper(root='', check=False, create=False, missing_exceptions=None)
Create key/value store based on this file-system
Makes a MutableMapping interface to the FS at the given root path. See fsspec.mapping.FSMap for further details.
- getxattr(path, attr_name, **kwargs)
Get an attribute from the metadata.
Examples
>>> mys3fs.getxattr('mykey', 'attribute_1')
'value_1'
- glob(path, maxdepth=None, **kwargs)
Find files by glob-matching.
Pattern matching capabilities for finding files that match the given pattern.
- Parameters:
path (str) – The glob pattern to match against
maxdepth (int or None) – Maximum depth for '**' patterns. Applied on the first '**' found. Must be at least 1 if provided.
kwargs – Additional arguments passed to find (e.g., detail=True)
- Return type:
List of matched paths, or dict of paths and their info if detail=True
Notes
Supported patterns:
- '*': Matches any sequence of characters within a single directory level
- '**': Matches any number of directory levels (must be an entire path component)
- '?': Matches exactly one character
- '[abc]': Matches any character in the set
- '[a-z]': Matches any character in the range
- '[!abc]': Matches any character NOT in the set
Special behaviors:
- If the path ends with '/', only folders are returned
- Consecutive '*' characters are compressed into a single '*'
- Empty brackets '[]' never match anything
- Negated empty brackets '[!]' match any single character
- Special characters in character classes are escaped properly
Limitations:
- '**' must be a complete path component (e.g., 'a/**/b', not 'a**b')
- No brace expansion ('{a,b}.txt')
- No extended glob patterns ('+(pattern)', '!(pattern)')
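For single-level patterns, the character-class rules above behave like Python's standard fnmatch matching. A local illustration with made-up key names (this is not how s3fs matches internally, only a sketch of the same per-component semantics):

```python
from fnmatch import fnmatchcase

# Made-up object key basenames for illustration.
names = ["data1.csv", "data2.csv", "datax.csv", "data10.csv", "notes.txt"]

# '[0-9]' range class: matches exactly one digit character.
print([n for n in names if fnmatchcase(n, "data[0-9].csv")])
# ['data1.csv', 'data2.csv']

# '?' matches exactly one character, digit or not.
print([n for n in names if fnmatchcase(n, "data?.csv")])
# ['data1.csv', 'data2.csv', 'datax.csv']

# '[!0-9]' negated class: one character that is NOT a digit.
print([n for n in names if fnmatchcase(n, "data[!0-9].csv")])
# ['datax.csv']
```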
- head(path, size=1024)
Get the first size bytes from file
- info(path, **kwargs)
Give details of entry at path
Returns a single dictionary, with exactly the same information as ls would with detail=True.
The default implementation calls ls and could be overridden by a shortcut. kwargs are passed on to ls().
Some file systems might not be able to measure the file's size, in which case the returned dict will include 'size': None.
- Returns:
dict with keys: name (full path in the FS), size (in bytes), type (file, directory, or something else) and other FS-specific keys.
- invalidate_cache(path=None)[source]
Discard any cached directory information
- Parameters:
path (string or None) – If None, clear all listings cached else listings at or under given path.
- invalidate_region_cache()
Invalidate the region cache (associated with buckets) if cache_regions is turned on.
- isdir(path)
Is this entry directory-like?
- isfile(path)
Is this entry file-like?
- lexists(path, **kwargs)
If there is a file at the given path (including broken links)
- listdir(path, detail=True, **kwargs)
Alias of AbstractFileSystem.ls.
- ls(path, detail=True, **kwargs)
List objects at path.
This should include subdirectories and files at that location. The difference between a file and a directory must be clear when details are requested.
The specific keys, or perhaps a FileInfo class, or similar, is TBD, but must be consistent across implementations. Must include:
full path to the entry (without protocol)
size of the entry, in bytes. If the value cannot be determined, will be None.
type of entry, "file", "directory" or other
Additional information may be present, appropriate to the file-system, e.g., generation, checksum, etc.
May use refresh=True|False to allow use of self._ls_from_cache to check for a saved listing and avoid calling the backend. This would be common where listing may be expensive.
- Parameters:
path (str)
detail (bool) – if True, gives a list of dictionaries, where each is the same as the result of info(path). If False, gives a list of paths (str).
kwargs (may have additional backend-specific options, such as version information)
- Returns:
List of strings if detail is False, or list of directory information
dicts if detail is True.
- make_bucket_versioned(bucket, versioned: bool = True)
Set bucket versioning status
- makedir(path, create_parents=True, **kwargs)
Alias of AbstractFileSystem.mkdir.
- makedirs(path, exist_ok=False)
Recursively make directories
Creates directory at path and any intervening required directories. Raises exception if, for instance, the path already exists but is a file.
- Parameters:
path (str) – leaf directory name
exist_ok (bool (False)) – If False, will error if the target already exists
- merge(path, filelist, **kwargs)
Create single S3 file from list of S3 files
Uses multi-part, no data is downloaded. The original files are not deleted.
- Parameters:
path (str) – The final file to produce
filelist (list of str) – The paths, in order, to assemble into the final file.
- metadata(path, refresh=False, **kwargs)
Return metadata of path.
- Parameters:
path (string/bytes) – filename to get metadata for
refresh (bool (=False)) – (ignored)
- mkdir(path, acl=False, create_parents=True, **kwargs)
Create directory entry at path
For systems that don't have true directories, may create an entry for this instance only and not touch the real filesystem
- Parameters:
path (str) – location
create_parents (bool) – if True, this is equivalent to makedirs
kwargs – may be permissions, etc.
- mkdirs(path, exist_ok=False)
Alias of AbstractFileSystem.makedirs.
- modified(path, version_id=None, refresh=False)[source]
Return the last modified timestamp of file at path as a datetime
- move(path1, path2, **kwargs)
Alias of AbstractFileSystem.mv.
- mv(path1, path2, recursive=False, maxdepth=None, **kwargs)
Move file(s) from one location to another
- open(path, mode='rb', block_size=None, cache_options=None, compression=None, **kwargs)
Return a file-like object from the filesystem
The resultant instance must function correctly in a with block.
- Parameters:
path (str) – Target file
mode (str like 'rb', 'w') – See builtin open(). Mode "x" (exclusive write) may be implemented by the backend. Even if it is, whether it is checked up front or on commit, and whether it is atomic is implementation-dependent.
block_size (int) – Some indication of buffering - this is a value in bytes
cache_options (dict, optional) – Extra arguments to pass through to the cache.
compression (string or None) – If given, open file using compression codec. Can either be a compression name (a key in fsspec.compression.compr) or "infer" to guess the compression from the filename suffix.
encoding (passed on to TextIOWrapper for text mode)
errors (passed on to TextIOWrapper for text mode)
newline (passed on to TextIOWrapper for text mode)
- pipe(path, value=None, **kwargs)
Put value into path
(counterpart to cat)
- Parameters:
path (string or dict(str, bytes)) – If a string, a single remote location to put value bytes; if a dict, a mapping of {path: bytes value}.
value (bytes, optional) – If using a single path, these are the bytes to put there. Ignored if path is a dict
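The two calling forms can be sketched with a plain dict standing in for the remote store. pipe_local is a hypothetical illustration of the documented dispatch, not s3fs code:

```python
def pipe_local(store, path, value=None):
    # Hypothetical in-memory stand-in for pipe(): a dict path maps many
    # locations to bytes at once; a string path writes a single value.
    if isinstance(path, dict):
        store.update(path)  # value is ignored in this form
    else:
        store[path] = value

store = {}
pipe_local(store, "bucket/a", b"first")
pipe_local(store, {"bucket/b": b"second", "bucket/c": b"third"})
print(sorted(store))  # ['bucket/a', 'bucket/b', 'bucket/c']
```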
- pipe_file(path, value, mode='overwrite', **kwargs)
Set the bytes of given file
- put(lpath, rpath, recursive=False, callback=<fsspec.callbacks.NoOpCallback object>, maxdepth=None, **kwargs)
Copy file(s) from local.
Copies a specific file or tree of files (if recursive=True). If rpath ends with a “/”, it will be assumed to be a directory, and target files will go within.
Calls put_file for each source.
- put_file(lpath, rpath, callback=<fsspec.callbacks.NoOpCallback object>, mode='overwrite', **kwargs)
Copy single file to remote
- put_tags(path, tags, mode='o')[source]
Set tags for given existing key
Tags are a str:str mapping that can be attached to any key, see https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation-tag-restrictions.html
This is similar to, but distinct from, key metadata, which is usually set at key creation time.
- Parameters:
path (str) – Existing key to attach tags to
tags (dict str, str) – Tags to apply.
mode – One of 'o' or 'm'. 'o': will overwrite any existing tags. 'm': will merge in new tags with existing tags; incurs two remote calls.
- read_block(fn, offset, length, delimiter=None)
Read a block of bytes from a file
Starting at offset of the file, read length bytes. If delimiter is set then we ensure that the read starts and stops at delimiter boundaries that follow the locations offset and offset + length. If offset is zero then we start at zero. The bytestring returned WILL include the end delimiter string.
If offset+length is beyond the eof, reads to eof.
- Parameters:
fn (string) – Path to filename
offset (int) – Byte offset to start read
length (int) – Number of bytes to read. If None, read to end.
delimiter (bytes (optional)) – Ensure reading starts and stops at delimiter bytestring
Examples
>>> fs.read_block('data/file.csv', 0, 13)
b'Alice, 100\nBo'
>>> fs.read_block('data/file.csv', 0, 13, delimiter=b'\n')
b'Alice, 100\nBob, 200\n'
Use length=None to read to the end of the file.
>>> fs.read_block('data/file.csv', 0, None, delimiter=b'\n')  # doctest: +SKIP
b'Alice, 100\nBob, 200\nCharlie, 300'
See also
fsspec.utils.read_block()
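The delimiter-snapping behavior described above can be sketched over any seekable file-like object. _seek_delimiter and read_block_local below are hypothetical local stand-ins (not the fsspec implementation), reproducing the examples from the docs with an io.BytesIO:

```python
import io

def _seek_delimiter(f, delimiter, blocksize=16):
    # Advance f to just past the next occurrence of delimiter.
    # At position 0 we start immediately, matching the rule above.
    if f.tell() == 0:
        return
    last = b""
    while True:
        chunk = f.read(blocksize)
        if not chunk:
            return  # EOF reached: no further delimiter
        full = last + chunk
        i = full.find(delimiter)
        if i >= 0:
            f.seek(f.tell() - (len(full) - i) + len(delimiter))
            return
        # keep a tail in case the delimiter straddles two chunks
        last = full[len(full) - (len(delimiter) - 1):]

def read_block_local(f, offset, length, delimiter=None):
    # Hypothetical sketch of the documented semantics, not s3fs code.
    if delimiter is None:
        f.seek(offset)
        return f.read(length)
    f.seek(offset)
    _seek_delimiter(f, delimiter)       # snap the start forward to a boundary
    start = f.tell()
    f.seek(offset + length)
    _seek_delimiter(f, delimiter)       # snap the end forward, delimiter included
    end = f.tell()
    f.seek(start)
    return f.read(end - start)

data = io.BytesIO(b"Alice, 100\nBob, 200\nCharlie, 300")
print(read_block_local(data, 0, 13))                   # b'Alice, 100\nBo'
print(read_block_local(data, 0, 13, delimiter=b"\n"))  # b'Alice, 100\nBob, 200\n'
```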
- read_bytes(path, start=None, end=None, **kwargs)
Alias of AbstractFileSystem.cat_file.
- read_text(path, encoding=None, errors=None, newline=None, **kwargs)
Get the contents of the file as a string.
- Parameters:
path (str) – URL of file on this filesystem
encoding (same as open.)
errors (same as open.)
newline (same as open.)
- rename(path1, path2, **kwargs)
Alias of AbstractFileSystem.mv.
- rm(path, recursive=False, maxdepth=None)
Delete files.
- Parameters:
path (str or list of str) – File(s) to delete.
recursive (bool) – If file(s) are directories, recursively delete contents and then also remove the directory
maxdepth (int or None) – Depth to pass to walk for finding files to delete, if recursive. If None, there will be no limit and infinite recursion may be possible.
- rm_file(path)
Delete a file
- rmdir(path)
Remove a directory, if empty
- async set_session(refresh=False, kwargs={})[source]
Establish S3 connection object.
This async method is called by any operation on an S3FileSystem instance. The refresh=True argument is useful if new credentials have been created and the instance needs to be reestablished. connect is a blocking version of set_session.
- Parameters:
refresh (bool (False)) – If True, create a new session even if one already exists.
kwargs (dict) – Currently unused.
- Return type:
Session to be closed later with await .close()
Examples
>>> s3 = S3FileSystem(profile="<profile name>")
>>> # use in an async coroutine to assign the client object to a local variable
>>> await s3.set_session()
>>> # blocking version of set_session
>>> s3.connect(refresh=True)
- setxattr(path, copy_kwargs=None, **kw_args)
Set metadata.
Attributes have to be of the form documented in the Metadata Reference.
- Parameters:
kw_args (key-value pairs like field="value", where the values must be strings) – Does not alter existing fields, unless the field appears here - if the value is None, delete the field.
copy_kwargs (dict, optional) – dictionary of additional params to use for the underlying s3.copy_object.
Examples
>>> mys3file.setxattr(attribute_1='value1', attribute_2='value2')
>>> # Example for use with copy_kwargs
>>> mys3file.setxattr(copy_kwargs={'ContentType': 'application/pdf'},
...                   attribute_1='value1')
- sign(path, expiration=100, **kwargs)[source]
Create a signed URL representing the given path
Some implementations allow temporary URLs to be generated, as a way of delegating credentials.
- Parameters:
path (str) – The path on the filesystem
expiration (int) – Number of seconds to enable the URL for (if supported)
- Returns:
URL – The signed URL
- Return type:
str
- Raises:
NotImplementedError – if method is not implemented for a filesystem
- size(path)
Size in bytes of file
- sizes(paths)
Size in bytes of each file in a list of paths
- split_path(path) -> tuple[str, str, str | None][source]
Normalise S3 path string into bucket and key.
- Parameters:
path (string) – Input path, like s3://mybucket/path/to/file
Examples
>>> split_path("s3://mybucket/path/to/file")
['mybucket', 'path/to/file', None]
>>> split_path("s3://mybucket/path/to/versioned_file?versionId=some_version_id")
['mybucket', 'path/to/versioned_file', 'some_version_id']
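The normalisation shown in the examples can be re-implemented in a few lines. split_path_local is a hypothetical sketch (returning a tuple, per the annotation), not the s3fs implementation:

```python
def split_path_local(path):
    # Hypothetical sketch of the normalisation shown above: strip the
    # protocol, separate bucket from key, and pull out any versionId.
    if path.startswith("s3://"):
        path = path[len("s3://"):]
    path, _, query = path.partition("?")
    bucket, _, key = path.partition("/")
    version_id = None
    for pair in query.split("&") if query else []:
        name, _, value = pair.partition("=")
        if name == "versionId":
            version_id = value
    return bucket, key, version_id

print(split_path_local("s3://mybucket/path/to/file"))
# ('mybucket', 'path/to/file', None)
```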
- start_transaction()
Begin write transaction for deferring files, non-context version
- stat(path, **kwargs)
Alias of AbstractFileSystem.info.
- tail(path, size=1024)
Get the last size bytes from file
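"Last size bytes" means a read anchored at the end of the file; a local analogue can be written with a seek relative to the end. tail_local and the sample data are hypothetical illustrations only:

```python
import io
import os

def tail_local(f, size=1024):
    # Hypothetical local analogue of tail(): read the final `size` bytes
    # by seeking relative to the end of the file-like object.
    f.seek(0, os.SEEK_END)
    filesize = f.tell()
    f.seek(max(filesize - size, 0))  # never seek before the start
    return f.read()

f = io.BytesIO(b"Alice, 100\nBob, 200\n")
print(tail_local(f, 9))  # b'Bob, 200\n'
```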
- to_dict(*, include_password: bool = True) -> dict[str, Any]
JSON-serializable dictionary representation of this filesystem instance.
- Parameters:
include_password (bool, default True) – Whether to include the password (if any) in the output.
- Returns:
Dictionary with keys cls (the python location of this class), protocol (text name of this class's protocol, first one in case of multiple), args (positional args, usually empty), and all other keyword arguments as their own keys.
Warning
Serialized filesystems may contain sensitive information which have been passed to the constructor, such as passwords and tokens. Make sure you store and send them in a secure environment!
- to_json(*, include_password: bool = True) -> str
JSON representation of this filesystem instance.
- Parameters:
include_password (bool, default True) – Whether to include the password (if any) in the output.
- Returns:
JSON string with keys cls (the python location of this class), protocol (text name of this class's protocol, first one in case of multiple), args (positional args, usually empty), and all other keyword arguments as their own keys.
Warning
Serialized filesystems may contain sensitive information which have been passed to the constructor, such as passwords and tokens. Make sure you store and send them in a secure environment!
- touch(path, truncate=True, data=None, **kwargs)
Create empty file or truncate
- property transaction
A context within which files are committed together upon exit
Requires the file class to implement .commit() and .discard() for the normal and exception cases.
- transaction_type
alias of Transaction
- tree(path: str = '/', recursion_limit: int = 2, max_display: int = 25, display_size: bool = False, prefix: str = '', is_last: bool = True, first: bool = True, indent_size: int = 4) -> str
Return a tree-like structure of the filesystem starting from the given path as a string.
- Parameters:
path (Root path to start traversal from)
recursion_limit (Maximum depth of directory traversal)
max_display (Maximum number of items to display per directory)
display_size (Whether to display file sizes)
prefix (Current line prefix for visual tree structure)
is_last (Whether current item is last in its level)
first (Whether this is the first call (displays root path))
indent_size (Number of spaces by indent)
- Returns:
str
- Return type:
A string representing the tree structure.
Example
>>> from fsspec import filesystem
>>> fs = filesystem('ftp', host='test.rebex.net', user='demo', password='password')
>>> tree = fs.tree(display_size=True, recursion_limit=3, indent_size=8, max_display=10)
>>> print(tree)
- ukey(path)
Hash of file properties, to tell if it has changed
- unstrip_protocol(name: str) -> str
Format FS-specific path to generic, including protocol
- upload(lpath, rpath, recursive=False, **kwargs)
Alias of AbstractFileSystem.put.
- url(path, expires=3600, client_method='get_object', **kwargs)
Generate presigned URL to access path by HTTP
- Parameters:
path (string) – the key path we are interested in
expires (int) – the number of seconds this signature will be good for.
- walk(path, maxdepth=None, topdown=True, on_error='omit', **kwargs)
Return all files under the given path.
List all files, recursing into subdirectories; output is iterator-style, like os.walk(). For a simple list of files, find() is available.
When topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is False has no effect. (see os.walk)
Note that the “files” outputted will include anything that is not a directory, such as links.
- Parameters:
path (str) – Root to recurse into
maxdepth (int) – Maximum recursion depth. None means limitless, but not recommended on link-based file-systems.
topdown (bool (True)) – Whether to walk the directory tree from the top downwards or from the bottom upwards.
on_error ("omit", "raise", a callable) – if omit (default), path with exception will simply be empty; If raise, an underlying exception will be raised; if callable, it will be called with a single OSError instance as argument
kwargs (passed to ls)
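Since the contract above mirrors os.walk, the pruning behavior can be demonstrated on a local throwaway tree; the directory names here are made up for illustration:

```python
import os
import shutil
import tempfile

# Build a small throwaway tree: root/keep/sub and root/skip.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "keep", "sub"))
os.makedirs(os.path.join(root, "skip"))

visited = []
for dirpath, dirnames, filenames in os.walk(root, topdown=True):
    # In-place edit of dirnames prunes the traversal, exactly as the
    # os.walk-style contract above describes.
    dirnames[:] = [d for d in dirnames if d != "skip"]
    visited.append(os.path.relpath(dirpath, root))

shutil.rmtree(root)  # clean up the throwaway tree
print(visited)  # root ('.'), then 'keep' and 'keep/sub'; 'skip' was pruned
```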
- write_bytes(path, value, **kwargs)
Alias of AbstractFileSystem.pipe_file.
- write_text(path, value, encoding=None, errors=None, newline=None, **kwargs)
Write the text to the given file.
An existing file will be overwritten.
- Parameters:
path (str) – URL of file on this filesystem
value (str) – Text to write.
encoding (same as open.)
errors (same as open.)
newline (same as open.)
- class s3fs.core.S3File(s3, path, mode='rb', block_size=52428800, acl=False, version_id=None, fill_cache=True, s3_additional_kwargs=None, autocommit=True, cache_type='readahead', requester_pays=False, cache_options=None, size=None)[source]
Open S3 key as a file. Data is only loaded and cached on demand.
- Parameters:
s3 (S3FileSystem) – botocore connection
path (string) – S3 bucket/key to access
mode (str) – One of ‘rb’, ‘wb’, ‘ab’. These have the same meaning as they do for the built-in open function.
block_size (int) – read-ahead size for finding delimiters
fill_cache (bool) – If seeking to a new part of the file beyond the current buffer, with this True, the buffer will be filled between the sections to best support random access. When reading only a few specific chunks out of a file, performance may be better if False.
acl (str) – Canned ACL to apply
version_id (str) – Optional version to read the file at. If not specified this will default to the current version of the object. This is only used for reading.
requester_pays (bool (False)) – If RequesterPays buckets are supported.
Examples
>>> s3 = S3FileSystem()
>>> with s3.open('my-bucket/my-file.txt', mode='rb') as f:
...     ...
See also
S3FileSystem.open – used to create S3File objects
- close()
Close file
Finalizes writes, discards cache
- fileno()
Return underlying file descriptor if one exists.
Raise OSError if the IO object does not use a file descriptor.
- flush(force=False)
Write buffered data to backend store.
Writes the current buffer, if it is larger than the block-size, or if the file is being closed.
- Parameters:
force (bool) – When closing, write the last block even if it is smaller than blocks are allowed to be. Disallows further writing to this file.
- getxattr(xattr_name, **kwargs)[source]
Get an attribute from the metadata. See getxattr().
Examples
>>> mys3file.getxattr('attribute_1')
'value_1'
- info()
File information about this path
- isatty()
Return whether this is an ‘interactive’ stream.
Return False if it can’t be determined.
- metadata(refresh=False, **kwargs)[source]
Return metadata of file. See metadata().
Metadata is cached unless refresh=True.
- read(length=-1)
Return data from cache, or fetch pieces as necessary
- Parameters:
length (int (-1)) – Number of bytes to read; if <0, all remaining bytes.
- readable()
Whether opened for reading
- readinto(b)
mirrors builtin file’s readinto method
https://docs.python.org/3/library/io.html#io.RawIOBase.readinto
- readline()
Read until and including the first occurrence of newline character
Note that, because of character encoding, this is not necessarily a true line ending.
- readlines()
Return all data, split by the newline character, including the newline character
- readuntil(char=b'\n', blocks=None)
Return data between current position and first occurrence of char
char is included in the output, except if the end of the file is encountered first.
- Parameters:
char (bytes) – Thing to find
blocks (None or int) – How much to read in each go. Defaults to file blocksize - which may mean a new read on every call.
- seek(loc, whence=0)
Set current file location
- Parameters:
loc (int) – byte location
whence ({0, 1, 2}) – from start of file, current location or end of file, resp.
- seekable()
Whether is seekable (only in read mode)
- setxattr(copy_kwargs=None, **kwargs)[source]
Set metadata. See setxattr().
Examples
>>> mys3file.setxattr(attribute_1='value1', attribute_2='value2')
- tell()
Current file location
- truncate(size=None, /)
Truncate file to size bytes.
File pointer is left unchanged. Size defaults to the current IO position as reported by tell(). Return the new size.
- writable()
Whether opened for writing
- write(data)
Write data to buffer.
Buffer only sent on flush() or if buffer is greater than or equal to blocksize.
- Parameters:
data (bytes) – Set of bytes to be written.
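The buffer-then-flush contract described for write() and flush() can be sketched without any S3 backend. BufferedWriteSketch is a hypothetical illustration; its parts list stands in for the multipart parts sent to the backend store:

```python
import io

class BufferedWriteSketch:
    # Hypothetical sketch of the buffering contract above: data accumulates
    # locally, and a "part" is only shipped once the buffer reaches
    # blocksize, or on a forced flush at close time.
    def __init__(self, blocksize):
        self.blocksize = blocksize
        self.buffer = io.BytesIO()
        self.parts = []  # stand-in for parts sent to the backend store

    def write(self, data):
        self.buffer.write(data)
        if self.buffer.tell() >= self.blocksize:
            self.flush()
        return len(data)

    def flush(self, force=False):
        payload = self.buffer.getvalue()
        if payload and (len(payload) >= self.blocksize or force):
            self.parts.append(payload)
            self.buffer = io.BytesIO()

f = BufferedWriteSketch(blocksize=8)
f.write(b"abc")       # stays in the local buffer
print(f.parts)        # []
f.write(b"defgh")     # buffer reaches 8 bytes: shipped as one part
print(f.parts)        # [b'abcdefgh']
f.write(b"xy")
f.flush(force=True)   # final short part, as on close
print(f.parts)        # [b'abcdefgh', b'xy']
```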
- writelines(lines, /)
Write a list of lines to stream.
Line separators are not added, so it is usual for each of the lines provided to have a line separator at the end.
- s3fs.mapping.S3Map(root, s3, check=False, create=False)[source]
Mirror previous class, not implemented in fsspec