files-and-json.ipynb
JSON files and you
import sys
sys.version

'3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:39:56) \n[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]'
! echo "hello world" > input.txt


To access a file in Python, we are going to use the open builtin and the close method.

If you are interested: what is the difference between a builtin and a method?

filename = 'input.txt'
f = open(filename)
print(type(f))
print(f.read())
f.close()

<class '_io.TextIOWrapper'>
hello world

What do you expect to happen if you read after closing?

It is a good idea to close your files as soon as you are done with them. It is easy to 'leak files' and crash, or accidentally write to a file that you should have closed.
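To see what the context manager (introduced next) is saving you from, here is the manual open/close pattern written out with try/finally, which guarantees the close even if an error occurs mid-read. This is a self-contained sketch using a throwaway file:

```python
# Set up a throwaway file so the example stands alone.
setup = open('throwaway.txt', 'w')
setup.write('hello world\n')
setup.close()

# Manual bookkeeping: try/finally guarantees close() runs
# even if read() raises an exception.
f = open('throwaway.txt')
try:
    data = f.read()
finally:
    f.close()

print(data)
print(f.closed)
```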

with open(filename) as f:
    print(f.read())

print(f.closed)

hello world

True

You should basically always use a context manager: it's one less thing to worry about, it makes it obvious where your file is open, and it can be used to manage basically any external resource.
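For example, the same with syntax manages resources other than files. A sketch with a temporary directory that is cleaned up automatically on exit:

```python
import os
import tempfile

# The `with` statement works for any resource that needs cleanup,
# not just files -- here, a temporary directory deleted on exit.
with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, 'scratch.txt')
    with open(path, 'w') as f:
        f.write('temporary data')
    exists_inside = os.path.exists(path)

# The directory (and everything in it) is gone once the block exits.
exists_outside = os.path.exists(tmpdir)
print(exists_inside, exists_outside)
```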

What do you expect to happen if you read twice?

with open(filename) as f:
    print(f.read())
    print(f.read())
hello world

Why might it be a bad idea to read the same data twice from a file?

What do you expect to happen if you read from a closed file?

with open(filename) as f:
    pass

f.read()


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-19-787db92aa7fc> in <module>()
      2     pass
      3
----> 4 f.read()

ValueError: I/O operation on closed file.

What do you expect to happen if you open a file twice?

with open(filename) as f:
    with open(filename) as f2:
        print(f.read())
        print(f2.read())

hello world

hello world

But don't do this if you are writing to the file.

What do you expect to happen if an error is raised inside the context block?

with open(filename) as f:
    f.close()
    f.read()
print('Hi mom!')

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-23-54657245ae0e> in <module>()
      1 with open(filename) as f:
      2     f.close()
----> 3     f.read()
      4 print('Hi mom!')

ValueError: I/O operation on closed file.
# When opening a file, we have to specify a mode:
# 'w'  writable
# 'r+' both read and write
# 'a'  write at the end of the file
# the default is read-only
with open(filename) as f:
    f.write("I've got a lovely bunch of coconuts")

---------------------------------------------------------------------------
UnsupportedOperation                      Traceback (most recent call last)
<ipython-input-26-608e991551fe> in <module>()
      1 with open(filename) as f:
----> 2     f.write("I've got a lovely bunch of coconuts")

UnsupportedOperation: not writable

Why don't these modes make any sense?

help(open)

Help on built-in function open in module io:

open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
    Open file and return a stream.  Raise IOError upon failure.

    file is either a text or byte string giving the name (and the path
    if the file isn't in the current working directory) of the file to
    be opened or an integer file descriptor of the file to be wrapped.
    (If a file descriptor is given, it is closed when the returned I/O
    object is closed, unless closefd is set to False.)

    mode is an optional string that specifies the mode in which the file
    is opened. It defaults to 'r' which means open for reading in text
    mode. Other common values are 'w' for writing (truncating the file if
    it already exists), 'x' for creating and writing to a new file, and
    'a' for appending (which on some Unix systems, means that all writes
    append to the end of the file regardless of the current seek
    position). In text mode, if encoding is not specified the encoding
    used is platform dependent: locale.getpreferredencoding(False) is
    called to get the current locale encoding. (For reading and writing
    raw bytes use binary mode and leave encoding unspecified.) The
    available modes are:

    ========= ===============================================================
    Character Meaning
    --------- ---------------------------------------------------------------
    'r'       open for reading (default)
    'w'       open for writing, truncating the file first
    'x'       create a new file and open it for writing
    'a'       open for writing, appending to the end of the file if it exists
    'b'       binary mode
    't'       text mode (default)
    '+'       open a disk file for updating (reading and writing)
    'U'       universal newline mode (deprecated)
    ========= ===============================================================

    The default mode is 'rt' (open for reading text). For binary random
    access, the mode 'w+b' opens and truncates the file to 0 bytes, while
    'r+b' opens the file without truncation. The 'x' mode implies 'w' and
    raises an FileExistsError if the file already exists.

    Python distinguishes between files opened in binary and text modes,
    even when the underlying operating system doesn't. Files opened in
    binary mode (appending 'b' to the mode argument) return contents as
    bytes objects without any decoding. In text mode (the default, or when
    't' is appended to the mode argument), the contents of the file are
    returned as strings, the bytes having been first decoded using a
    platform-dependent encoding or using the specified encoding if given.

    'U' mode is deprecated and will raise an exception in future versions
    of Python.  It has no effect in Python 3.  Use newline to control
    universal newlines mode.

    buffering is an optional integer used to set the buffering policy.
    Pass 0 to switch buffering off (only allowed in binary mode), 1 to
    select line buffering (only usable in text mode), and an integer > 1
    to indicate the size of a fixed-size chunk buffer.  When no buffering
    argument is given, the default buffering policy works as follows:

    * Binary files are buffered in fixed-size chunks; the size of the
      buffer is chosen using a heuristic trying to determine the
      underlying device's "block size" and falling back on
      io.DEFAULT_BUFFER_SIZE. On many systems, the buffer will typically
      be 4096 or 8192 bytes long.

    * "Interactive" text files (files for which isatty() returns True)
      use line buffering.  Other text files use the policy described
      above for binary files.

    encoding is the name of the encoding used to decode or encode the
    file. This should only be used in text mode. The default encoding is
    platform dependent, but any encoding supported by Python can be
    passed.  See the codecs module for the list of supported encodings.

    errors is an optional string that specifies how encoding errors are
    to be handled---this argument should not be used in binary mode.
    Pass 'strict' to raise a ValueError exception if there is an encoding
    error (the default of None has the same effect), or pass 'ignore' to
    ignore errors. (Note that ignoring encoding errors can lead to data
    loss.) See the documentation for codecs.register or run
    'help(codecs.Codec)' for a list of the permitted encoding error
    strings.

    newline controls how universal newlines works (it only applies to
    text mode). It can be None, '', '\n', '\r', and '\r\n'.  It works as
    follows:

    * On input, if newline is None, universal newlines mode is enabled.
      Lines in the input can end in '\n', '\r', or '\r\n', and these are
      translated into '\n' before being returned to the caller. If it is
      '', universal newline mode is enabled, but line endings are
      returned to the caller untranslated. If it has any of the other
      legal values, input lines are only terminated by the given string,
      and the line ending is returned to the caller untranslated.

    * On output, if newline is None, any '\n' characters written are
      translated to the system default line separator, os.linesep. If
      newline is '' or '\n', no translation takes place. If newline is
      any of the other legal values, any '\n' characters written are
      translated to the given string.

    If closefd is False, the underlying file descriptor will be kept open
    when the file is closed. This does not work when a file name is given
    and must be True in that case.

    A custom opener can be used by passing a callable as *opener*. The
    underlying file descriptor for the file object is then obtained by
    calling *opener* with (*file*, *flags*). *opener* must return an open
    file descriptor (passing os.open as *opener* results in functionality
    similar to passing None).

    open() returns a file object whose type depends on the mode, and
    through which the standard file operations such as reading and
    writing are performed. When open() is used to open a file in a text
    mode ('w', 'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When
    used to open a file in a binary mode, the returned class varies: in
    read binary mode, it returns a BufferedReader; in write binary and
    append binary modes, it returns a BufferedWriter, and in read/write
    mode, it returns a BufferedRandom.

    It is also possible to use a string or bytearray as a file for both
    reading and writing. For strings StringIO can be used like a file
    opened in a text mode, and for bytes a BytesIO can be used like a
    file opened in a binary mode.

Files have a line iterator!

coconuts = 'coconuts.txt'
with open(coconuts, 'w') as f:
    f.write("I've got a lovely bunch of coconuts")
    f.write("There they are, all standing in a row")
    f.write("Big ones, small ones, some as big as your head")
    f.write("Give them a twist a flick of the wrist")
    f.write("That's what the showman said")

with open(coconuts, 'r') as f:
    for line in f:
        print(line)

I've got a lovely bunch of coconutsThere they are, all standing in a rowBig ones, small ones, some as big as your headGive them a twist a flick of the wristThat's what the showman said

Notice that none of the writes included a '\n', so the file contains a single long line and the loop prints everything at once. .read() and the line iterator are the workhorses of file access. There is a lot more you can do, but you will have to poke around in the documentation to find the specific methods you need.
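A few of those other methods, sketched on a small throwaway file: readline for one line at a time, readlines for the rest as a list, and seek to rewind:

```python
# Write a small multi-line file to demonstrate on.
with open('lines.txt', 'w') as f:
    f.write('first\nsecond\nthird\n')

with open('lines.txt') as f:
    first = f.readline()    # one line, newline included
    rest = f.readlines()    # the remaining lines, as a list
    f.seek(0)               # rewind to the beginning of the file
    everything = f.read()   # the whole file again

print(first, rest, everything)
```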

Who knows what JSON is?

Who feels like they can explain it?

JSON is a text-based serialization format. It represents constructs that are common between languages in a readable form. It has:

• string
• number
• dictionary (called an object)
• list (called an array)
• boolean
• None (called null)
import json
json_string = '{"message": "hello world"}'
d = json.loads(json_string)
print(d)

{'message': 'hello world'}
json.dumps(d)

'{"message": "hello world"}'
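Each of the types listed above survives a round trip through dumps and loads. A quick self-contained check:

```python
import json

# One value of each JSON type: string, number, object, array, boolean, null.
data = {
    'string': 'hello',
    'number': 1.5,
    'object': {'nested': True},
    'array': [1, 2, 3],
    'boolean': False,
    'null': None,
}
restored = json.loads(json.dumps(data))
print(restored == data)  # the round trip preserves every value
```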

JSON is universal, simple, and powerful. It should probably be your go-to format for representing data.

Things to watch out for:

• don't represent your binary data as a string
• it doesn't have comments, which makes it an inappropriate format for configuration
• you can represent things in JSON that cannot be represented exactly in your language.
json.loads('{"number": 1.6000000000000000000001}')

{'number': 1.6}
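One common way to honor the first rule above is to base64-encode bytes before putting them in JSON, and decode them on the way back out. A sketch:

```python
import base64
import json

payload = b'\x00\xff not valid text'

# bytes are not JSON-serializable, so encode them as an ASCII-safe string...
text = json.dumps({'blob': base64.b64encode(payload).decode('ascii')})

# ...and decode on the way back out.
restored = base64.b64decode(json.loads(text)['blob'])
print(restored == payload)
```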

If you are on Python 3, you should be representing file paths as pathlib.Path. It is a tremendous amount of work to represent paths across operating systems, and pathlib does that for you. There are a surprising number of gotchas in dealing with paths (try getting the suffix of file.txt.backup), and pathlib anticipates them. Pathlib is fun to work with.

from pathlib import Path
(Path('~/folderA')/'folderB'/'folderC').expanduser()

PosixPath('/home/user/folderA/folderB/folderC')
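The suffix gotcha mentioned above, for example, is already handled:

```python
from pathlib import Path

p = Path('file.txt.backup')
print(p.suffix)    # only the final extension
print(p.suffixes)  # every extension, in order
print(p.stem)      # the name with the final extension stripped
```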