Version 1 (modified by jcnelson, 5 years ago)
- Idea: the files a Syndicate client is accessing are too big to be stored locally
- The client has too little memory (e.g. thin client, smartphone, etc.) and/or insufficient bandwidth to retrieve the entire file, or only a part of the file is desired and that part can be located quickly.
- CoBlitz holds chunks of files that are NOT held locally by the clients
- CoBlitz and the clients can “trade” blocks back and forth as the client needs different parts of the file.
- EXAMPLE: Giving read-only access to large files to mobile phones or thin clients, or worker nodes in a server farm.
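The block-trading idea above can be sketched in a few lines: a thin client holds only the blocks it has touched, and fetches missing blocks from a CoBlitz-style cache on demand. The `BlockCache`/`ThinClient` names and the 64-byte block size are illustrative assumptions, not Syndicate's actual API.

```python
# Block-granularity reads: the client keeps a small local block map and
# pulls only the missing blocks from the cache, never the whole file.
BLOCK_SIZE = 64  # bytes; real deployments would use much larger blocks

class BlockCache:
    """Stands in for CoBlitz: serves blocks the client does not hold."""
    def __init__(self, data: bytes):
        self._data = data

    def get_block(self, index: int) -> bytes:
        start = index * BLOCK_SIZE
        return self._data[start:start + BLOCK_SIZE]

class ThinClient:
    def __init__(self, cache: BlockCache):
        self._cache = cache
        self._blocks = {}  # block index -> bytes held locally

    def read(self, offset: int, length: int) -> bytes:
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for i in range(first, last + 1):
            if i not in self._blocks:          # fetch only what is missing
                self._blocks[i] = self._cache.get_block(i)
        buf = b"".join(self._blocks[i] for i in range(first, last + 1))
        skip = offset - first * BLOCK_SIZE
        return buf[skip:skip + length]

cache = BlockCache(bytes(range(256)))
client = ThinClient(cache)
assert client.read(60, 8) == bytes(range(60, 68))  # read spans two blocks
```

After the first read, blocks 0 and 1 are cached locally, so repeated reads of the same region cost no further fetches.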
Secure Temporary Access Filesystem
- Idea: instead of getting clever with caching otherwise non-cacheable data (e.g. video and audio streams, Flash video, Silverlight video), just leave the encoded video/audio streams as files on origin servers, and have a metadata server give certain views of the origin servers’ files to clients, depending on external contracts between clients and both types of servers.
- The user acquires an encrypted metadata snapshot of certain media URLs along with a temporary password that will be used to authenticate with the origin servers (e.g. user has pre-existing credentials with the metadata server, like a log-in or an authorized key). The metadata-given password is only good for a certain period of time, allowing for only temporary access to the origin servers. The metadata may change while the password is valid.
- The metadata server and origin servers cooperate to ensure that the password expires correctly (e.g. the metadata server, before serving a metadata snapshot to a user, informs the origin servers of the temporary password that will be used to authenticate the user and for how long).
- As long as the password embedded in the metadata is good, the client will be able to authenticate with the origin servers and stream encrypted blocks of data from the origin server (CoBlitz naturally provides the caching for millions of clients).
- EXAMPLE: VLC et al. play the files as if they were local; Syndicate handles all of the streaming; CoBlitz handles the caching.
- EXAMPLE: Securely distribute software updates (e.g. APT/dpkg could use this).
- EXAMPLE: This can be used to deliver and record Internet-based TV and radio (or some medium where content gets generated at a steady pace), where users pay a monthly fee to the content provider for renewed metadata snapshots.
- EXAMPLE: With a specially-crafted media client, this can be used to replace NetFlix (especially if the files delivered to the user are encrypted and can be decrypted only by the player--then, this could be part of a DRM system).
- NOTE: This can be thought of as a special case of the write-local, read-global namespace filesystem described in the next use-case.
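The temporary-password handshake described above can be sketched as follows: the metadata server registers a password and a lifetime with the origin server before handing the snapshot to the user, and the origin server honors the password only within that window. The `OriginServer` class and its method names are illustrative assumptions.

```python
import time

class OriginServer:
    def __init__(self):
        self._passwords = {}  # password -> expiry timestamp

    def register_password(self, password: str, lifetime_s: float) -> None:
        """Called by the metadata server before it serves a snapshot."""
        self._passwords[password] = time.time() + lifetime_s

    def authenticate(self, password: str) -> bool:
        """A client request succeeds only while the password is unexpired."""
        expiry = self._passwords.get(password)
        return expiry is not None and time.time() < expiry

origin = OriginServer()
origin.register_password("tmp-secret", lifetime_s=0.05)
assert origin.authenticate("tmp-secret")        # valid within the window
time.sleep(0.1)
assert not origin.authenticate("tmp-secret")    # expired afterward
```

Because expiry is enforced by the origin server rather than by the metadata embedded at the client, a stale snapshot cannot extend access beyond the agreed lifetime.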
- Idea: a user’s collection of devices all run a Syndicate client, metadata server, and origin server, and they can retrieve file data from one another and use Syndicate and CoBlitz to coordinate.
- Shortly after placing a file into a device’s master copy, the user will likely want to copy the file to his/her other devices; hence, placing CoBlitz between the devices makes this replication faster.
- Since every device is an origin server and a metadata server in addition to a client, devices can simply copy files to and from one another via CoBlitz.
- Since some/all devices may be mobile, their IP addresses may change frequently without the other devices knowing. So, each client additionally generates a globally-unique file containing its IP address (or resolvable hostname) and pre-emptively caches it into CoBlitz. Then, one client can discover the location of another client by GET-ing its globally-unique address file from CoBlitz when it wants to talk to it.
- We effectively use CoBlitz here as a form of dynamic DNS, but specific to a user’s device (e.g. the address files can all be encrypted with a pre-shared key).
- A user’s address file can be named by the user’s public key to avoid collisions.
- Using each other’s address files, a user’s devices can almost always find one another even if they’re on the move. Then, the client Syndicate driver simply performs the HTTP GETs to the appropriate addresses to read a remote device’s data.
- For added security, the data streams between a user’s devices can be encrypted.
- EXAMPLE: This could serve as the backbone of a distributed, decentralized DropBox implementation. All that would be needed besides the above security would be a way to implement file revision control, to bring it up to par with DropBox’s feature list (but this is superfluous).
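The address-file scheme above amounts to a tiny dynamic-DNS layer on top of the cache: each device publishes its current address under a name derived from its public key, and peers resolve one another with a GET on that name. The dict-backed `Cache` and the SHA-256 naming are illustrative assumptions.

```python
import hashlib

class Cache:
    """Stands in for CoBlitz; holds pre-emptively pushed address files."""
    def __init__(self):
        self._files = {}

    def put(self, name: str, data: bytes) -> None:
        self._files[name] = data

    def get(self, name: str):
        return self._files.get(name)

def address_file_name(public_key: bytes) -> str:
    # Name the address file by a digest of the device's public key,
    # so different users' files cannot collide.
    return hashlib.sha256(public_key).hexdigest()

def publish_address(cache: Cache, public_key: bytes, address: str) -> None:
    cache.put(address_file_name(public_key), address.encode())

def resolve_address(cache: Cache, public_key: bytes):
    data = cache.get(address_file_name(public_key))
    return data.decode() if data else None

cache = Cache()
publish_address(cache, b"phone-pubkey", "203.0.113.7")
assert resolve_address(cache, b"phone-pubkey") == "203.0.113.7"
assert resolve_address(cache, b"laptop-pubkey") is None
```

When a device moves, it simply re-publishes its address file; the next resolve picks up the new location. In a real deployment the address file would additionally be encrypted with the user's pre-shared key, as noted above.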
Write-local, read-global Namespace Filesystem
- Idea: Clients are also origin servers. Each client maintains a directory in the filesystem and the files within it; this directory defines the client’s “namespace”.
- A metadata server periodically polls the clients for metadata on the pieces of the filesystem they each maintain and publishes the aggregated data. The metadata server creates a global map of the clients, which clients use to locate and read data from other clients. The metadata server could be a crawler.
- CoBlitz accelerates file distribution--it’s the distributed cache for client-to-client reads.
- This is a lot like how Gopher functions.
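The poll-and-aggregate step above can be sketched minimally: the metadata server asks each client for the paths in its namespace and builds a global map from path to the client that serves it. The class and method names here are assumptions for illustration.

```python
class NamespaceClient:
    """A client that is also an origin server for one directory subtree."""
    def __init__(self, name: str, paths):
        self.name = name
        self._paths = paths

    def list_metadata(self):
        """Report the files this client maintains (its namespace)."""
        return list(self._paths)

def aggregate(clients):
    """Build the global map used to locate data for client-to-client reads."""
    global_map = {}
    for client in clients:
        for path in client.list_metadata():
            global_map[path] = client.name  # path -> serving client
    return global_map

clients = [
    NamespaceClient("alice", ["/alice/notes.txt"]),
    NamespaceClient("bob", ["/bob/data.csv"]),
]
assert aggregate(clients) == {
    "/alice/notes.txt": "alice",
    "/bob/data.csv": "bob",
}
```

A reader then consults the map to find which client to contact, and the actual file blocks flow through CoBlitz as described above.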
BIO5 dataset retrieval
- Scientific research generates terabytes of data *per day*.
- Remote clients need to work on the data, but the analysis tools available to the remote clients assume that the data is stored locally.
- Syndicate provides the FS-like view to the analysis tools without requiring that the client download the whole file--CoBlitz serves the appropriate portions of the files.
- TODO: need more information on this
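The partial-read pattern above can be sketched under one assumption: the analysis tool issues an ordinary filesystem read, and the driver translates it into a bounded byte-range request (like an HTTP `Range: bytes=start-end` fetch), so only that portion of the multi-terabyte file crosses the network. The `fetch_range` helper is an illustrative stand-in, not Syndicate's API.

```python
def fetch_range(dataset: bytes, start: int, end: int) -> bytes:
    """Stands in for an HTTP GET with a 'Range: bytes=start-end' header
    against an origin server or CoBlitz (end is inclusive, per HTTP)."""
    return dataset[start:end + 1]

def read_at(offset: int, length: int, fetch) -> bytes:
    """What an FS-level read(offset, length) maps to: one bounded fetch."""
    return fetch(offset, offset + length - 1)

# A toy "dataset" with an interesting region buried in the middle.
dataset = b"A" * 1000 + b"GENE-REGION" + b"C" * 1000
chunk = read_at(1000, 11, lambda s, e: fetch_range(dataset, s, e))
assert chunk == b"GENE-REGION"
```

The analysis tool sees a local file; only the 11 requested bytes, not the surrounding kilobytes (or, in practice, terabytes), are transferred.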