diff --git a/src/docs/user/configuration/configuring_backups.diviner b/src/docs/user/configuration/configuring_backups.diviner
index 9c283c5722..0e851d01ac 100644
--- a/src/docs/user/configuration/configuring_backups.diviner
+++ b/src/docs/user/configuration/configuring_backups.diviner
@@ -1,171 +1,171 @@
@title Configuring Backups and Performing Migrations
@group config
Advice for backing up Phabricator, or migrating from one machine to another.
Overview
========
Phabricator does not currently have a comprehensive backup system, but creating
backups is not particularly difficult and Phabricator does have a few basic
tools which can help you set up a reasonable process. In particular, the things
which need to be backed up are:
- the MySQL databases;
- hosted repositories;
- uploaded files; and
- your Phabricator configuration files.
This document discusses approaches for backing up this data.
If you are migrating from one machine to another, you can generally follow the
same steps you would to create a backup and then restore it: back up the old
machine, then restore the data onto the new machine.
WARNING: You need to restart Phabricator after restoring data.
Restarting Phabricator after performing a restore makes sure that caches are
flushed properly. For complete instructions, see
@{article:Restarting Phabricator}.
Backup: MySQL Databases
=======================
Most of Phabricator's data is stored in MySQL, and it's the most important thing
to back up. You can run `bin/storage dump` to get a dump of all the MySQL
databases. This is a convenience script which just runs a normal `mysqldump`,
but will only dump databases Phabricator owns.
Since most of this data is compressible, it may be helpful to run it through
gzip prior to storage. For example:
- phabricator/ $ ./bin/storage dump | gzip > backup.sql.gz
+ phabricator/ $ ./bin/storage dump --compress --output backup.sql.gz
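Note that with `--output`, the dump will refuse to replace an existing file
unless you also pass `--overwrite` (see the workflow changes below):
  phabricator/ $ ./bin/storage dump --compress --output backup.sql.gz --overwrite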
Then store the backup somewhere safe, like in a box buried under an old tree
stump. No one will ever think to look for it there.
Restore: MySQL
==============
To restore a MySQL dump, just pipe it to `mysql` on a clean host. (You may need
to uncompress it first, if you compressed it prior to storage.)
$ gunzip -c backup.sql.gz | mysql
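(The dump produced by `bin/storage dump` already includes the necessary
`CREATE DATABASE` and `USE` statements, as generated by the workflow below, so
you do not need to create the databases beforehand.)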
Backup: Hosted Repositories
===========================
If you host repositories in Phabricator, you should back them up. You can use
`bin/repository list-paths` to show the local paths on disk for each
repository. To back them up, copy them elsewhere.
You can also just clone them and keep the clones up to date, or use
{nav Add Mirror} to have them mirror somewhere automatically.
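For example, a minimal copy-based backup, assuming your repositories live
under the default `/var/repo/` directory and `/backups/repositories/` is a
hypothetical destination (use `bin/repository list-paths` to confirm the
actual locations on your install):
  phabricator/ $ ./bin/repository list-paths
  phabricator/ $ rsync -a /var/repo/ /backups/repositories/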
Restore: Hosted Repositories
============================
To restore hosted repositories, copy them back into the correct locations
as shown by `bin/repository list-paths`.
Backup: Uploaded Files
======================
Uploaded files may be stored in several different locations. The backup
procedure depends on where files are stored:
**Default / MySQL**: Under the default configuration, uploaded files are stored
in MySQL, so the MySQL backup will include all files. In this case, you don't
need to do any additional work.
**Amazon S3**: If you use Amazon S3, redundancy and backups are built in to the
service. This is probably sufficient for most installs. If you trust Amazon with
your data //except not really//, you can backup your S3 bucket outside of
Phabricator.
**Local Disk**: If you use the local disk storage engine, you'll need to back up
files manually. You can do this by creating a copy of the root directory where
you told Phabricator to put files (the `storage.local-disk.path` configuration
setting).
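For example, a minimal sketch, assuming `storage.local-disk.path` points at
the hypothetical directory `/var/phabricator-files`:
  phabricator/ $ ./bin/config get storage.local-disk.path
  phabricator/ $ rsync -a /var/phabricator-files/ /backups/files/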
For more information about configuring how files are stored, see
@{article:Configuring File Storage}.
Restore: Uploaded Files
=======================
To restore a backup of local disk storage, just copy the backup into place.
Backup: Configuration Files
===========================
You should also back up your configuration files, and any scripts you use to
deploy or administer Phabricator (like a customized upgrade script). The best
way to do this is to check them into a private repository somewhere and use
whatever backup process you already have in place for repositories. Just copying
them somewhere will work fine too, of course.
In particular, you should back up this configuration file, which Phabricator
creates:
phabricator/conf/local/local.json
This file contains all of the configuration settings that have been adjusted
by using `bin/config set <key> <value>`.
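For example, to back it up by simple copying (with `/backups/config/` as a
hypothetical destination):
  phabricator/ $ cp conf/local/local.json /backups/config/local.json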
Restore: Configuration Files
============================
To restore configuration files, just copy them into the right locations. Copy
your backup of `local.json` to `phabricator/conf/local/local.json`.
Security
========
MySQL dumps have no built-in encryption and most data in Phabricator is stored in
a raw, accessible form, so giving a user access to backups is a lot like giving
them shell access to the machine Phabricator runs on. In particular, a user who
has the backups can:
- read data that policies do not permit them to see;
- read email addresses and object secret keys; and
- read other users' session and conduit tokens and impersonate them.
Some of this information is durable, so disclosure of even a very old backup may
present a risk. If you restrict access to the Phabricator host or database, you
should also restrict access to the backups.
Skipping Indexes
================
By default, `bin/storage dump` does not dump all of the data in the database:
it skips some caches which can be rebuilt automatically and do not need to be
backed up. Some of these caches are very large, so the size of the dump may
be significantly smaller than the size of the databases.
If you have a large amount of data, you can specify `--no-indexes` when taking
a database dump to skip additional tables which contain search indexes. This
will reduce the size (and increase the speed) of the backup. This is an
advanced option which most installs will not benefit from.
This index data can be rebuilt after a restore, but will not be rebuilt
automatically. If you choose to use this flag, you must manually rebuild
indexes after a restore (for details, see ((reindex))).
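For example, a sketch of a dump that skips indexes, and the manual rebuild
after the corresponding restore (see ((reindex)) for the authoritative steps):
  phabricator/ $ ./bin/storage dump --no-indexes --compress --output backup.sql.gz
  phabricator/ $ ./bin/search index --all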
Next Steps
==========
Continue by:
- returning to the @{article:Configuration Guide}.
diff --git a/src/infrastructure/storage/management/workflow/PhabricatorStorageManagementDumpWorkflow.php b/src/infrastructure/storage/management/workflow/PhabricatorStorageManagementDumpWorkflow.php
index d5f1da9ecf..c3b9a32327 100644
--- a/src/infrastructure/storage/management/workflow/PhabricatorStorageManagementDumpWorkflow.php
+++ b/src/infrastructure/storage/management/workflow/PhabricatorStorageManagementDumpWorkflow.php
@@ -1,360 +1,363 @@
<?php
final class PhabricatorStorageManagementDumpWorkflow
extends PhabricatorStorageManagementWorkflow {
protected function didConstruct() {
$this
->setName('dump')
->setExamples('**dump** [__options__]')
->setSynopsis(pht('Dump all data in storage to stdout.'))
->setArguments(
array(
array(
'name' => 'for-replica',
'help' => pht(
'Add __--master-data__ to the __mysqldump__ command, '.
'generating a CHANGE MASTER statement in the output.'),
),
array(
'name' => 'output',
'param' => 'file',
'help' => pht(
'Write output directly to disk. This handles errors better '.
'than using pipes. Use with __--compress__ to gzip the '.
'output.'),
),
array(
'name' => 'compress',
'help' => pht(
'With __--output__, write a compressed file to disk instead '.
'of a plaintext file.'),
),
array(
'name' => 'no-indexes',
'help' => pht(
'Do not dump data in rebuildable index tables. This means '.
'backups are smaller and faster, but you will need to manually '.
'rebuild indexes after performing a restore.'),
),
array(
'name' => 'overwrite',
'help' => pht(
'With __--output__, overwrite the output file if it already '.
'exists.'),
),
));
}
protected function isReadOnlyWorkflow() {
return true;
}
public function didExecute(PhutilArgumentParser $args) {
+ $output_file = $args->getArg('output');
+ $is_compress = $args->getArg('compress');
+ $is_overwrite = $args->getArg('overwrite');
+
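+ // Validate flag combinations up front so the workflow fails fast,
+ // before connecting to the database or starting any dump work.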
+ if ($is_compress) {
+ if ($output_file === null) {
+ throw new PhutilArgumentUsageException(
+ pht(
+ 'The "--compress" flag can only be used alongside "--output".'));
+ }
+
+ if (!function_exists('gzopen')) {
+ throw new PhutilArgumentUsageException(
+ pht(
+ 'The "--compress" flag requires the PHP "zlib" extension, but '.
+ 'that extension is not available. Install the extension or '.
+ 'omit the "--compress" option.'));
+ }
+ }
+
+ if ($is_overwrite) {
+ if ($output_file === null) {
+ throw new PhutilArgumentUsageException(
+ pht(
+ 'The "--overwrite" flag can only be used alongside "--output".'));
+ }
+ }
+
+ if ($output_file !== null) {
+ if (Filesystem::pathExists($output_file)) {
+ if (!$is_overwrite) {
+ throw new PhutilArgumentUsageException(
+ pht(
+ 'Output file "%s" already exists. Use "--overwrite" '.
+ 'to overwrite.',
+ $output_file));
+ }
+ }
+ }
+
$api = $this->getSingleAPI();
$patches = $this->getPatches();
- $console = PhutilConsole::getConsole();
-
$with_indexes = !$args->getArg('no-indexes');
$applied = $api->getAppliedPatches();
if ($applied === null) {
- $namespace = $api->getNamespace();
- $console->writeErr(
+ throw new PhutilArgumentUsageException(
pht(
- '**Storage Not Initialized**: There is no database storage '.
- 'initialized in this storage namespace ("%s"). Use '.
- '**%s** to initialize storage.',
- $namespace,
- './bin/storage upgrade'));
- return 1;
+ 'There is no database storage initialized in the current storage '.
+ 'namespace ("%s"). Use "bin/storage upgrade" to initialize '.
+ 'storage or use "--namespace" to choose a different namespace.',
+ $api->getNamespace()));
}
$ref = $api->getRef();
$ref_key = $ref->getRefKey();
$schemata_query = id(new PhabricatorConfigSchemaQuery())
->setAPIs(array($api))
->setRefs(array($ref));
$actual_map = $schemata_query->loadActualSchemata();
$expect_map = $schemata_query->loadExpectedSchemata();
$schemata = $actual_map[$ref_key];
$expect = $expect_map[$ref_key];
$targets = array();
foreach ($schemata->getDatabases() as $database_name => $database) {
$expect_database = $expect->getDatabase($database_name);
foreach ($database->getTables() as $table_name => $table) {
// NOTE: It's possible for us to find tables in these databases which
// we don't expect to be there. For example, an older version of
// Phabricator may have had a table that was later dropped. We assume
// these are data tables and always dump them, erring on the side of
// caution.
$persistence = PhabricatorConfigTableSchema::PERSISTENCE_DATA;
if ($expect_database) {
$expect_table = $expect_database->getTable($table_name);
if ($expect_table) {
$persistence = $expect_table->getPersistenceType();
}
}
switch ($persistence) {
case PhabricatorConfigTableSchema::PERSISTENCE_CACHE:
// When dumping tables, leave the data in cache tables in the
// database. This data will be rebuilt automatically after the dump
// is restored and does not need to be persisted in backups.
$with_data = false;
break;
case PhabricatorConfigTableSchema::PERSISTENCE_INDEX:
// When dumping tables, leave index data behind if the caller
// specified "--no-indexes". These tables can be rebuilt manually
// from other tables, but will not be rebuilt automatically.
$with_data = $with_indexes;
break;
case PhabricatorConfigTableSchema::PERSISTENCE_DATA:
default:
$with_data = true;
break;
}
$targets[] = array(
'database' => $database_name,
'table' => $table_name,
'data' => $with_data,
);
}
}
list($host, $port) = $this->getBareHostAndPort($api->getHost());
$has_password = false;
$password = $api->getPassword();
if ($password) {
if (strlen($password->openEnvelope())) {
$has_password = true;
}
}
- $output_file = $args->getArg('output');
- $is_compress = $args->getArg('compress');
- $is_overwrite = $args->getArg('overwrite');
-
- if ($is_compress) {
- if ($output_file === null) {
- throw new PhutilArgumentUsageException(
- pht(
- 'The "--compress" flag can only be used alongside "--output".'));
- }
- }
-
- if ($is_overwrite) {
- if ($output_file === null) {
- throw new PhutilArgumentUsageException(
- pht(
- 'The "--overwrite" flag can only be used alongside "--output".'));
- }
- }
-
- if ($output_file !== null) {
- if (Filesystem::pathExists($output_file)) {
- if (!$is_overwrite) {
- throw new PhutilArgumentUsageException(
- pht(
- 'Output file "%s" already exists. Use "--overwrite" '.
- 'to overwrite.',
- $output_file));
- }
- }
- }
-
$argv = array();
$argv[] = '--hex-blob';
$argv[] = '--single-transaction';
$argv[] = '--default-character-set=utf8';
if ($args->getArg('for-replica')) {
$argv[] = '--master-data';
}
$argv[] = '-u';
$argv[] = $api->getUser();
$argv[] = '-h';
$argv[] = $host;
// MySQL's default "max_allowed_packet" setting is fairly conservative
// (16MB). If we try to dump a row which is larger than this limit, the
// dump will fail.
// We encourage users to increase this limit during setup, but modifying
// the "[mysqld]" section of the configuration file (instead of
// "[mysqldump]" section) won't apply to "mysqldump" and we can not easily
// detect what the "mysqldump" setting is.
// Since no user would ever reasonably want a dump to fail because a row
// was too large, just manually force this setting to the largest supported
// value.
$argv[] = '--max-allowed-packet';
$argv[] = '1G';
if ($port) {
$argv[] = '--port';
$argv[] = $port;
}
$commands = array();
foreach ($targets as $target) {
$target_argv = $argv;
if (!$target['data']) {
$target_argv[] = '--no-data';
}
if ($has_password) {
$command = csprintf(
'mysqldump -p%P %Ls -- %R %R',
$password,
$target_argv,
$target['database'],
$target['table']);
} else {
$command = csprintf(
'mysqldump %Ls -- %R %R',
$target_argv,
$target['database'],
$target['table']);
}
$commands[] = array(
'command' => $command,
'database' => $target['database'],
);
}
// Decrease the CPU priority of this process so it doesn't contend with
// other more important things.
if (function_exists('proc_nice')) {
proc_nice(19);
}
// If we are writing to a file, stream the command output to disk. This
// mode makes sure the whole command fails if there's an error (commonly,
// a full disk). See T6996 for discussion.
if ($output_file === null) {
$file = null;
} else if ($is_compress) {
$file = gzopen($output_file, 'wb1');
} else {
$file = fopen($output_file, 'wb');
}
if (($output_file !== null) && !$file) {
throw new Exception(
pht(
'Failed to open file "%s" for writing.',
$output_file));
}
$created = array();
try {
foreach ($commands as $spec) {
// Because we're dumping database-by-database, we need to generate our
// own CREATE DATABASE and USE statements.
$database = $spec['database'];
$preamble = array();
if (!isset($created[$database])) {
$preamble[] =
"CREATE DATABASE /*!32312 IF NOT EXISTS*/ `{$database}` ".
"/*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_bin */;\n";
$created[$database] = true;
}
$preamble[] = "USE `{$database}`;\n";
$preamble = implode('', $preamble);
$this->writeData($preamble, $file, $is_compress, $output_file);
$future = new ExecFuture('%C', $spec['command']);
$iterator = id(new FutureIterator(array($future)))
->setUpdateInterval(0.100);
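// Poll the subprocess and stream its output to the destination in
// chunks, so the full dump is never buffered in memory.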
foreach ($iterator as $ready) {
list($stdout, $stderr) = $future->read();
$future->discardBuffers();
if (strlen($stderr)) {
fwrite(STDERR, $stderr);
}
$this->writeData($stdout, $file, $is_compress, $output_file);
if ($ready !== null) {
$ready->resolvex();
}
}
}
if (!$file) {
$ok = true;
} else if ($is_compress) {
$ok = gzclose($file);
} else {
$ok = fclose($file);
}
if ($ok !== true) {
throw new Exception(
pht(
'Failed to close file "%s".',
$output_file));
}
} catch (Exception $ex) {
// If we might have written a partial file to disk, try to remove it so
// we don't leave any confusing artifacts lying around.
try {
if ($file !== null) {
Filesystem::remove($output_file);
}
} catch (Exception $ex) {
// Ignore any errors we hit.
}
throw $ex;
}
return 0;
}
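// Write a chunk of dump output to stdout, a plain file, or a gzip
// stream. Fails loudly on a short write (commonly, a full disk).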
private function writeData($data, $file, $is_compress, $output_file) {
if (!strlen($data)) {
return;
}
if (!$file) {
$ok = fwrite(STDOUT, $data);
} else if ($is_compress) {
$ok = gzwrite($file, $data);
} else {
$ok = fwrite($file, $data);
}
if ($ok !== strlen($data)) {
throw new Exception(
pht(
'Failed to write %d byte(s) to file "%s".',
new PhutilNumber(strlen($data)),
$output_file));
}
}
}