backreference.org Report


  • Alexa Global Ranking: #723,560; Alexa Ranking in United States: #348,387

    Server: nginx...

    The main IP address: 109.74.198.107. Server location: London, United Kingdom. ISP: Linode LLC. TLD: org. Country code: GB.

    The description: proudly uncool and out of fashion skip to content home about feed a framework for backuppc ‘s pre/post scripts posted by waldner on 6 may 2015, 2:44 pm this is just one of many possible ways to do thi...

    This report was last updated on 12-Jun-2018.

Created Date: 2009-04-11

Technical data for backreference.org


GeoIP lookup provides information such as latitude, longitude, and ISP (Internet Service Provider). Our GeoIP service located the host backreference.org: it is currently hosted in the United Kingdom, and its service provider is Linode LLC.

Latitude: 51.508529663086
Longitude: -0.12574000656605
Country: United Kingdom (GB)
City: London
Region: England
ISP: Linode LLC
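
This lookup can be reproduced locally; a minimal sketch, assuming the legacy GeoIP command-line client (geoiplookup) and its country database are installed, which should report something like:

$ geoiplookup 109.74.198.107
GeoIP Country Edition: GB, United Kingdom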

HTTP Header Analysis


HTTP header information is part of the HTTP protocol: the user's browser sends a request to the server (which here identifies itself as nginx) containing the details of what the browser wants and what it will accept back from the web server.

Content-Encoding: gzip
Transfer-Encoding: chunked
Strict-Transport-Security: max-age=31536000; includeSubDomains
Server: nginx
Connection: keep-alive
Link: ; rel="https://api.w.org/"
Date: Tue, 12 Jun 2018 14:04:27 GMT
Content-Type: text/html; charset=UTF-8
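
To re-check these response headers yourself, a HEAD request is enough; a minimal sketch with curl (-s silences the progress meter, -I fetches headers only), whose output should resemble the list above:

$ curl -sI https://backreference.org/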

DNS

SOA: ns1.he.net. hostmaster.he.net. 2018060603 10800 1800 604800 86400
NS: ns2.he.net., ns3.he.net., ns4.he.net., ns5.he.net.
IPv4: 109.74.198.107 (ASN: 63949; owner: LINODE-AP Linode, LLC, US; country: GB)
IPv6: 2a01:7e00::f03c:91ff:fe96:96f3 (ASN: 63949; owner: LINODE-AP Linode, LLC, US; country: GB)
TXT: "v=spf1 a mx ip4:109.74.198.107/32 ip6:2a01:7e00::f03c:91ff:fe96:96f3/128 include:spf.migadu.com ~all"
MX: preference = 10, mail exchanger = aspmx1.migadu.com.
MX: preference = 20, mail exchanger = aspmx2.migadu.com.
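
These records can be verified with standard DNS tools; a minimal sketch using dig (querying one of the zone's own name servers, eg ns2.he.net, avoids stale caches):

$ dig +short SOA  backreference.org @ns2.he.net
$ dig +short A    backreference.org @ns2.he.net
$ dig +short AAAA backreference.org @ns2.he.net
$ dig +short TXT  backreference.org @ns2.he.net
$ dig +short MX   backreference.org @ns2.he.net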

HtmlToText

Proudly uncool and out of fashion

A framework for BackupPC's pre/post scripts

Posted by waldner on 6 May 2015, 2:44 pm

This is just one of many possible ways to do this. The idea is to put something like this in the configuration of each host that is being backed up:

$Conf{DumpPreUserCmd} = [
  '/usr/local/bin/backuppc_prescript.sh',
  '-c', '$client',
  '-h', '$host',
  '-i', '$hostIP',
  '-x', '$XferMethod',
  '-t', '$type',
];

$Conf{DumpPostUserCmd} = [
  '/usr/local/bin/backuppc_postscript.sh',
  '-c', '$client',
  '-h', '$host',
  '-i', '$hostIP',
  '-x', '$XferMethod',
  '-t', '$type',
  '-o', '$xferOK',
];

This way, each script is passed all the information about the backup that BackupPC makes available. Each host would have the same configuration, so management of per-host configuration files is easy. Here is a sample backuppc_prescript.sh:

#!/bin/bash

# standard BackupPC pre-script

client=
host=
hostip=
xfermethod=
incrfull=

while getopts ':c:h:i:x:t:' opt; do
  case $opt in
    c) client=$OPTARG ;;
    h) host=$OPTARG ;;
    i) hostip=$OPTARG ;;
    x) xfermethod=$OPTARG ;;
    t) incrfull=$OPTARG ;;
    \?) echo "Invalid option: -$OPTARG." >&2
        exit 1 ;;
    :) echo "Option -$OPTARG requires an argument." >&2
       exit 1 ;;
  esac
done

# exit with error if some argument is missing
( [ "$client" = "" ] || [ "$host" = "" ] || [ "$hostip" = "" ] || \
  [ "$xfermethod" = "" ] || [ "$incrfull" = "" ] ) && exit 1

error=0

# run extra commands for this host, if any
# if you want to fail the backup, set error to nonzero here
# (or exit nonzero directly)
if [ -r "/etc/backuppc/scripts/${host}-pre.sh" ]; then
  . "/etc/backuppc/scripts/${host}-pre.sh"
fi

exit $error

The backuppc_postscript.sh script follows a similar pattern:

#!/bin/bash

# standard BackupPC post-script

client=
host=
hostip=
xfermethod=
incrfull=
xferok=
hostid=host   # or client or ip

while getopts ':c:h:i:x:t:o:n:w:' opt; do
  case $opt in
    c) client=$OPTARG ;;
    h) host=$OPTARG ;;
    i) hostip=$OPTARG ;;
    x) xfermethod=$OPTARG ;;
    t) incrfull=$OPTARG ;;
    o) xferok=$OPTARG ;;
    n) extraname=$OPTARG ;;
    w) hostid=$OPTARG ;;
    \?) echo "Invalid option: -$OPTARG" >&2
        exit 1 ;;
    :) echo "Option -$OPTARG requires an argument" >&2
       exit 1 ;;
  esac
done

# exit with error if some argument is missing
( [ "$client" = "" ] || [ "$host" = "" ] || [ "$hostip" = "" ] || \
  [ "$xfermethod" = "" ] || [ "$incrfull" = "" ] || [ "$xferok" = "" ] ) && exit 1

# here we know how the backup went, so do anything appropriate
# (eg, notify the monitoring system, update some database, whatever)

# examples:

zabbix_sender -z zabbix.example.com -s "${host}" -k backup.status -o ${xferok} 2>&1

printf '%s\t%s\t%d\t%s\n' "${host}" backup_status "${xferok}" "" | send_nsca -H nagios.example.com -c /etc/send_nsca.cfg

mysql -h mysql.example.com -e "insert into history.backups (date, host, type, status) values (now(), '${host}', ${incrfull}, ${xferok});"

error=0

# run extra commands for this host, if any
# if you want to fail the backup, set "error" to nonzero inside this script
# (or exit nonzero directly)
if [ -r "/etc/backuppc/scripts/${host}-post.sh" ]; then
  . "/etc/backuppc/scripts/${host}-post.sh"
fi

exit $error

Everything shown up to here is fixed. If some host needs to run some extra or special task then, as can be seen, it's enough to drop a <hostname>-pre.sh and/or <hostname>-post.sh script into /etc/backuppc/scripts (or wherever, really) for it to be run.
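
For example, a per-host drop-in could look like the following (a hypothetical sketch: the host name db01, the mysqldump call and the dump path are invented for illustration; since the file is sourced by the wrapper above, the variables set there, like $host, and the $error flag are directly in scope):

# /etc/backuppc/scripts/db01-pre.sh -- hypothetical per-host drop-in
# sourced by backuppc_prescript.sh, so $client, $host, $hostip etc. are available

# dump the databases so the backup picks up a consistent snapshot
if ! mysqldump --all-databases > /var/backups/db01-dump.sql; then
  error=1   # nonzero makes the wrapper exit nonzero and thus fail the backup
fi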
Note that these scripts are sourced, so they have access to the same complete information (environment) the caller has about backup status, backup type and so on. Crucially, they can also set the $error variable, so they can decide the overall success or failure status of the backup (assuming $Conf{UserCmdCheckStatus} is set to 1 in the main configuration, of course, as it is by default).

Filed under linux, shell, tips, worksforme. Tagged backuppc, monitoring, scripting. Comment

Detecting empty files in awk

Posted by waldner on 7 April 2015, 1:47 pm

We have an awk script that can process multiple files, and we want to do some special task at the beginning of each file (for this example we just print the file name, but it can be anything, of course). The classic awk idiom to do this is something like:

function process_file(){
  print "processing " FILENAME "..."
}

FNR == 1 { process_file() }

# rest of code

So we call our script with three files and get:

$ awk -f script.awk file1 file2 file3
processing file1...
processing file2...
processing file3...

Alright. But what happens if some file is empty? Let's try it (we use /dev/null to simulate an empty file):

$ awk -f script.awk file1 /dev/null file3
processing file1...
processing file3...

Right: since an empty file has no lines, it can never match FNR == 1, so for the purposes of our per-file processing task it's effectively skipped. Depending on the exact needs, this may or may not be acceptable. Usually it is, but what if we want to be sure that we run our code for each file, regardless of whether it's empty or not?

GNU awk

If we have GNU awk and can assume it's available anywhere our script will run (or can force it as a prerequisite for users), then it's easy: just use the special BEGINFILE block instead of FNR == 1.

function process_file(){
  print "processing " FILENAME "..."
}

BEGINFILE { process_file() }

(Btw, GNU awk also has a corresponding ENDFILE special block.) And there we have it:

$ gawk -f script.awk file1 /dev/null file3
processing file1...
processing /dev/null...
processing file3...

But alas, for the time being this is not standard, so it can only run with GNU awk.

Standard awk

With standard awk, we have to stick to what is available, namely the FNR == 1 condition. If our process_file function is executed, then we know we're seeing a non-empty file. So our only option is, within this function, to check whether some previous file has been skipped and, if so, catch up with its processing. How do we do this check? Well, awk stores all the arguments to the program in the ARGV[] array, so we can keep our own pointer to the index of the expected "current" file being processed and check that it matches FILENAME (which is set by awk and always matches the current file); if they are not the same, it means some previous file was skipped, so we catch up.

First version of our processing function (we choose to ignore the lint/style issue represented by the fact that passing a global variable to a function that accepts a parameter of the same name shadows it, as it's totally harmless here and improves code readability):

function process_it(filename, is_empty) {
  print "processing " filename " (" (is_empty ? "empty" : "nonempty") ")..."
}

function process_file(argind) {
  argind++
  # if ARGV[argind] differs from FILENAME, we skipped some files. Catch up
  while (ARGV[argind] != FILENAME) {
    process_it(ARGV[argind], 1)
    argind++
  }
  # finally, process the current file
  process_it(ARGV[argind], 0)
  return argind
}

BEGIN { argind = 0 }

FNR == 1 { argind = process_file(argind) }

# rest of code here

(The index variable is named argind. The name is not random; GNU awk has an equivalent built-in variable, called ARGIND.) Let's test it:

$ awk -f script.awk file1 /dev/null file3
processing file1 (nonempty)...
processing /dev/null (empty)...
processing file3 (nonempty)...

$ awk -f script.awk /dev/null /dev/null file3
processing /dev/null (empty)...
processing /dev/null (empty)...
processing file3 (nonempty)...

$ awk -f script.awk file1 /dev/null /dev/null
processing file1 (nonempty)...
$ # oops...

So there's a corner case where it doesn't work, namely when the last file(s) are all empty: since there's no later non-empty file, our function doesn't get any further chance to be called to catch up. This can be fixed: we just call our function from the END block. When we're called from the END block, we just process all the arguments that haven't been processed yet (that is, from argind to ARGC - 1), if any (these would all be empty files). Revised code:

function process_it(filename, is_empty) {
  print "processing " filename " (" (is_empty ? "empty" : "nonempty") ")..."
}

function process_file(argind, end) {
  argind++
  if (end) {
    # we had empty files at the end of the arguments
    for (; argind <= ARGC - 1; argind++)
      process_it(ARGV[argind], 1)
    return argind
  } else {
    # if ARGV[argind] differs from FILENAME, we skipped some files. Catch up
    while (ARGV[argind] != FILENAME) {
      process_it(ARGV[argind], 1)
      argind++
    }
    # finally, process the current file
    process_it(ARGV[argind], 0)
    return argind
  }
}

BEGIN { argind = 0 }

FNR == 1 { argind = process_file(argind, 0) }

# rest of code here...

END { argind = process_file(argind, 1) }   # here argind == ARGC

Let's test it again:

$ awk -f script.awk file1 /dev/null file3
processing file1 (nonempty)...
processing /dev/null (empty)...
processing file3 (nonempty)...

$ awk -f script.awk /dev/null /dev/null file3
processing /dev/null (empty)...
processing /dev/null (empty)...
processing file3 (nonempty)...

$ awk -f script.awk file1 /dev/null /dev/null
processing file1 (nonempty)...
processing /dev/null (empty)...
processing /dev/null (empty)...

$ awk -f script.awk /dev/null /dev/null /dev/null
processing /dev/null (empty)...
processing /dev/null (empty)...
processing /dev/null (empty)...

But wait, we aren't done yet!

$ awk -f script.awk file1 /dev/null a=10 file3
processing file1 (nonempty)...
processing /dev/null (empty)...
processing a=10 (empty)...
processing file3 (nonempty)...

That is, awk allows mixing filenames and variable assignments in the argument list. This is really a feature, as it allows, for example, modifying FS between files. Here's the relevant text from the standard:

An operand that begins with an <underscore> or alphabetic character from the portable character set [...], followed by a sequence of underscores, digits, and alphabetics from the portable character set, followed by the '=' character, shall specify a variable assignment rather than a pathname.

But this also means that we, in our processing, should detect assignments and not treat them as if they were filenames. Based on the above rules, we can write a function that checks whether its argument is an assignment or not, and use it to decide whether an argument should be processed.
Final code that includes this check:

function is_assignment(s) {
  return (s ~ /^[_a-zA-Z][_a-zA-Z0-9]*=/)
}

function process_it(filename, is_empty) {
  if (! is_assignment(filename))
    print "processing " filename " (" (is_empty ? "empty" : "nonempty") ")..."
}

function process_file(argind, end) {
  argind++
  if (end) {
    # we had empty files at the end of the arguments
    for (; argind <= ARGC - 1; argind++)
      process_it(ARGV[argind], 1)
    return argind
  } else {
    # if ARGV[argind] differs from FILENAME, we skipped some files. Catch up
    while (ARGV[argind] != FILENAME) {
      process_it(ARGV[argind], 1)
      argind++
    }
    # finally, process the current file
    process_it(ARGV[argind], 0)
    return argind
  }
}

BEGIN { argind = 0 }

FNR == 1 { argind = process_file(argind, 0) }

# rest of code here...

END { argind = process_file(argind, 1) }   # here argind == ARGC

Final tests:

$ awk -f script.awk file1 /dev/null a=10 file3
processing file1 (nonempty)...
processing /dev/null (empty)...
processing file3 (nonempty)...

$ awk -f script.awk file1 /dev/null a=10 /dev/null
processing file1 (nonempty)...
processing /dev/null (empty)...
processing /dev/null (empty)...

$ awk -f script.awk /dev/null a=10 /dev/null file1
processing /dev/null (empty)...
processing /dev/null (empty)...
processing file1 (nonempty)...

# now we have an actual file called a=10
$ awk -f script.awk /dev/null ./a=10 /dev/null file1
processing /dev/null (empty)...
processing ./a=10 (nonempty)...
processing /dev/null (empty)...
processing file1 (nonempty)...

Filed under awk, faq, shell, tips. Tagged awk, empty files. Comment

(Semi-)automated ~/.ssh/config management

Posted by waldner on 8 March 2015, 1:36 pm

Following up from here, a concrete application of the technique sketched at the end of that article. Considering that it's a quick and dirty hack, and that the configuration format was conjured up from scratch in 10 minutes, it has worked surprisingly well so far (for what it has to do). It's also a highly ad-hoc hack, which means that it will be absolutely useless (at least, without making more or less heavy changes) in a lot of environments.

The idea: automated generation of the ~/.ssh/config file (which is just like /etc/ssh/ssh_config, but per-user). As anyone who has used ssh more than a few times perfectly knows (or should know, though that doesn't always seem to be the case), having to repeatedly type

ssh -p 1234 -A [email protected]

every time is not at all the same as typing (for example)

ssh s01pd1

That's one of the main reasons for using the ~/.ssh/config file, of course: creating easier aliases for otherwise complicated and/or long hostnames (and at the same time being able to supply extra options like username, port etc. without having to type them on the command line). For the above, one could put this in the config:

Host s01pd1
User root
Port 1234
ForwardAgent yes
HostName s01.paris.dc1.example.com

Since the ssh client checks this file even before attempting DNS resolution, we have accomplished our goal of reducing the amount of keystrokes to type for this connection (and, consequently, reduced the likelihood of typos and the time needed to type it). However, in certain environments machines come and go rapidly, and manually editing the file each time to keep it up-to-date is tedious and error-prone (and, furthermore, there are often groups of machines with the same configuration). Starting from a plain list of hostnames, it's easy to programmatically generate a ~/.ssh/config file.
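In the simplest case, that generation could be a trivial shell loop (a minimal sketch, assuming a hypothetical hosts.txt with one FQDN per line, and using the first label of each FQDN as the alias):

#!/bin/bash
# naive ~/.ssh/config generation: one Host block per FQDN in hosts.txt
while read -r fqdn; do
  printf 'Host %s\n    HostName %s\n    User root\n\n' "${fqdn%%.*}" "$fqdn"
done < hosts.txt >> ~/.ssh/config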
However, we don't simply want the hostnames replicated; we also want to have (short!) aliases for each host. So that's the first desired feature: creating an alias for each host, following some fixed rule. How exactly the alias is generated from the FQDN can vary depending on what looks and feels easiest, most logical or most convenient for the user, so the mechanism should allow for the definition of "rules". But there's no need to invent something new; these rules can be based on the good old regular expression syntax, which is surely well suited for the task.

The second problem to solve is that, for a lot of reasons, there will surely have to be host entries in the ~/.ssh/config file that do not easily lend themselves to being automatically generated (because the ssh port is different, because the username is different, because for this one machine X forwarding is needed, because it needs ad-hoc crypto parameters, because there's no obvious transformation pattern to use to generate the short name, because the machine is not part of any group, because... a lot of other reasons). In other words, it must be possible to keep a number of manually maintained entries (hopefully few, but of course it depends) which should not be touched when the file is subject to automated (re)generation.

This problem is solved by creating a "safe" zone in the file, delimited by special comment markers. When regenerating the file, the contents of the safe zone are preserved and copied verbatim, so manual changes must be done inside this area. Due to the way ssh looks for values in the file (the value from the first matching entry is used), the safe zone is located at the end of the file, so for example it's possible to set more specific per-host values in the automatically generated part, and finally set general defaults (eg Host * and so on) in the safe zone. So our skeleton to be used as a starting point for the (semi-)automatically managed ~/.ssh/config is something like this:

#### begin safe zone ####
# put manually maintained entries here, they will be preserved.
#### end safe zone ####

When starting from scratch, the above section will be included anyway (albeit empty) in the generated file. If you want to use an existing ~/.ssh/config as a starting point, add the above special comment markers at its beginning and end, effectively turning the whole file into a safe zone. Later refining is always possible, so better safe than sorry.

Now, for the actual host definitions, we can use a very simple file format. Hosts can be divided into groups, where hosts belonging to the same group share (at least) the same DNS domain. This is totally arbitrary; after all, as said, we're talking about an ad-hoc thing. More shared options can be specified, as we'll see. For each host group a list of (unqualified) hostnames is given, space-separated, followed (separated by a colon, ":") by a domain to be appended to the hostname.
This is the bare minimum; with this we get the obvious output. So, for example, starting from

# paris dc1 hosts
server01 server02 mysql01 : paris.dc1.example.com

# frankfurt dc2 hosts
mongo01 mongo02 : frankfurt.dc2.example.com

we get (username "root" is assumed by default, another arbitrary decision):

Host server01
HostName server01.paris.dc1.example.com
User root

Host server02
HostName server02.paris.dc1.example.com
User root

Host mysql01
HostName mysql01.paris.dc1.example.com
User root

Host mongo01
HostName mongo01.frankfurt.dc2.example.com
User root

Host mongo02
HostName mongo02.frankfurt.dc2.example.com
User root

So at least we save ourselves the hassle of typing the username and the FQDN (ie, we can do "ssh server01" instead of "ssh [email protected]"). Not bad. But life isn't always that easy, and some day there might be another "server01" host in some other domain (host group), at which point "ssh server01" would cease to be useful. So we use a third field to specify an (optional, but highly recommended) "transformation" expression (in the form of Perl's s/// operator) which is applied to the unqualified hostname to derive the final alias for each host. This way, we can create (for example) "server01p1" and "server01f2" as aliases for the one in dc1 paris and the one in dc2 frankfurt respectively, and restore harmony in the world (if only it were so easy). So we can do this:

# paris dc1 hosts
server01 server02 mysql01 : paris.dc1.example.com : s/$/p1/

# frankfurt dc2 hosts
server01 mongo01 : frankfurt.dc2.example.com : s/$/f2/

to get:

Host server01p1
HostName server01.paris.dc1.example.com
User root

Host server02p1
HostName server02.paris.dc1.example.com
User root

Host mysql01p1
HostName mysql01.paris.dc1.example.com
User root

Host server01f2
HostName server01.frankfurt.dc2.example.com
User root

Host mongo01f2
HostName mongo01.frankfurt.dc2.example.com
User root

Now we have to type two characters more, but it's still a lot better than the full FQDN, and it allows us to distinguish between the two "server01". If the hosts share some other common options, they can be added starting from the fourth field. For example, a group of office switches that only support old weak crypto algorithms and need username "admin" (not an infrequent case):

# crappy office switches
sw1 sw2 sw3 sw4 : office.int : s/^/o/ : Ciphers 3des-cbc : MACs hmac-sha1 : KexAlgorithms diffie-hellman-group1-sha1 : User admin

now gives:

Host osw1
Ciphers 3des-cbc
MACs hmac-sha1
KexAlgorithms diffie-hellman-group1-sha1
User admin
HostName sw1.office.int

Host osw2
Ciphers 3des-cbc
MACs hmac-sha1
KexAlgorithms diffie-hellman-group1-sha1
User admin
HostName sw2.office.int

Host osw3
Ciphers 3des-cbc
MACs hmac-sha1
KexAlgorithms diffie-hellman-group1-sha1
User admin
HostName sw3.office.int

Host osw4
Ciphers 3des-cbc
MACs hmac-sha1
KexAlgorithms diffie-hellman-group1-sha1
User admin
HostName sw4.office.int

Within the extra options, simple interpolation of the special escape sequences %h and %d is supported, similar to what ssh does in its config files (though %d is not supported there): %h is replaced with the (unqualified) hostname, %d with the domain.
This makes it possible to say:

# paris dc1 hosts, behind firewall
server01 server02 server03 : paris.dc1.example.com : s/$/p1/ : ProxyCommand ssh admin@firewall.%d nc %h 22

and have the following automatically generated:

Host server01p1
ProxyCommand ssh [email protected] nc server01 22
User root

Host server02p1
ProxyCommand ssh [email protected] nc server02 22
User root

Host server03p1
ProxyCommand ssh [email protected] nc server03 22
User root

(Yes, there is a very limited amount of rudimentary extra-option parsing, for example to avoid producing a HostName option - which would be harmless, anyway - if ProxyCommand is present.) For more on the ProxyCommand directive, see for example here.

So the generic format of the template used to define hosts is:

#host(s) : domain [ : transformation [ : extra_opt_1 ] [ : extra_opt_2 ] ... [ : extra_opt_n ] ]
# the first 2 fields are mandatory, although domain can be empty

Comments and empty lines are ignored. Spaces around the field-separating colons can be added for readability, but are otherwise ignored. If no domain should be appended (for example because it's automatically appended as part of the host's domain resolution mechanism), the domain field can be left empty. Similarly, if no transformation is desired, the transformation field can be left empty to mean "apply no transformation" (the bare unqualified hostname will directly become the alias). We assume this template file with host definitions is saved in ~/.ssh_config_hosts. Adapt the code as needed. As mentioned, the automatically generated host blocks are placed before the safe zone, which is always preserved.

Here's the code to regenerate ~/.ssh/config starting from the host definitions in the format explained above and an (optional) existing ~/.ssh/config. Warning: this code directly overwrites the existing ~/.ssh/config file, so it's highly advised to make a backup copy before starting to experiment. Output to stdout can also be enabled (see the comment in the code) to visually check the result without overwriting.

#!/usr/bin/perl

use warnings;
use strict;

my $tpl_file = "$ENV{HOME}/.ssh_config_hosts";
my $config_file = "$ENV{HOME}/.ssh/config";

my @staticpart = ();
my @generatedpart = ();

my $beg_safepat = '#### begin safe zone ####';
my $end_safepat = '#### end safe zone ####';

# read safe section of the config file (to be preserved)
if (-f $config_file) {
  open(my $confr, "<", $config_file) or die "Cannot open $config_file for reading: $!";
  my $insafe = 0;
  while (<$confr>) {
    if (/^$beg_safepat$/) { $insafe = 1; next; }
    if (/^$end_safepat$/) { $insafe = 0; last; }
    next if not $insafe;
    push @staticpart, $_;
  }
  close($confr) or die "Cannot close $config_file: $!";
}

# read host template
open(my $tplr, "<", $tpl_file) or die "Cannot open template $tpl_file for reading: $!";

while (<$tplr>) {
  # skip empty lines and comments
  next if /^\s*(?:#.*)?$/;
  chomp;
  s/\s*#.*//;
  my ($hlist, $domain, $transf, @extra) = split(/\s*:\s*/);
  my @hosts = split(/\s+/, $hlist);
  for my $host (@hosts) {
    my $entry = "";
    my $alias = $host;
    if ($transf) {
      eval "\$alias =~ $transf;";
    }
    $entry = "Host $alias";
    my %opts = ();
    for (@extra) {
      # minimal %h/%d interpolation for things like ProxyCommand etc...
      (my $extra = $_) =~ s/%h/$host/g;
      $extra =~ s/%d/$domain/g;
      $entry .= "\n$extra";
      my ($op) = $extra =~ /^(\S+)/;
      $opts{lc($op)} = 1;
    }
    if (!exists($opts{proxycommand})) {
      $entry .= "\nHostName $host" . ($domain ? ".$domain" : "");
    }
    if (!exists($opts{user})) {
      $entry .= "\nUser root";
    }
    push @generatedpart, $entry;
  }
}
close($tplr) or die "Cannot close template $tpl_file: $!";

# write everything out to $config_file
open(my $confw, ">", $config_file) or die "Cannot open $config_file for writing: $!";
# use this to send to stdout instead
#my $confw = *STDOUT;

print $confw "#########################################################################\n";
print $confw "# The following entries are automatically generated, do not change them\n";
print $confw "# directly. Instead change the file $tpl_file\n";
print $confw "# and run $0 to regenerate them.\n";
print $confw "#########################################################################\n\n";

# generated part, each item is a host block
print $confw (join("\n\n", @generatedpart), "\n\n");

# static part (safe zone)
for ("$beg_safepat\n", @staticpart, "$end_safepat\n") {
  print $confw $_;
}
print $confw "\n";

close($confw) or die "Cannot close $config_file: $!";
exit;

Filed under linux, shell, tips, worksforme. Tagged configuration management, kludges, perl, ssh. 4 comments

Remote-to-remote data copy

Posted by waldner on 9 February 2015, 9:14 am

...going through the local machine, which is what people normally want and try to do. Of course it's not as efficient as a direct copy between the involved boxes, but many times it's the only option, for various reasons. Here are some ways (some with standard tools, some home-made) to accomplish the task. We'll indicate the two remote machines between which data has to be transferred with remote1 and remote2. We assume no direct connectivity between them is possible, but we have access to both from the local machine (with passwordless ssh where appropriate).

remote1 to local, local to remote2

This is of course the obvious and naive way: just copy everything temporarily from remote1 to the local machine (with whatever method), then again from the local machine to remote2. If copying remote-to-remote is bad, doing it this way is even worse, as we actually need space on the local machine to store the data, albeit only temporarily. Sample code using rsync (options are only indicative):

$ rsync -avz remote1:/src/dir/ /local/dir/
sending incremental file list
...
$ rsync -avz /local/dir/ remote2:/dest/dir/
sending incremental file list
...

For small or even medium amounts of data this solution can be workable, but it's clearly not very satisfactory.

scp -3

Newer versions of scp have a command line switch (-3) which does just what we want: a remote-to-remote copy going through the local machine. In this case, at least, we don't need local disk space:

$ scp -3 -r remote1:/src/dir remote2:/dest/dir   # recursive to copy everything; adapt as needed

An annoying "feature" of scp -3 is that there's no indication of progress whatsoever (whereas the default for non-remote-to-remote copies is to show the progress of each file as it's copied), and no option to enable it. Sure, with -v that information is printed, but so is a lot of other stuff.

ssh + tar

We can also, of course, use ssh and tar:

$ ssh remote1 'tar -C /src/dir/ -cvzf - .' | ssh remote2 'tar -C /dest/dir/ -xzvf -'

tar + netcat/socat

Can we modify our nettar tool to support remote-to-remote copies? The answer is yes, and here's the code for a generalized version that automatically detects whether a local-to-remote, remote-to-local or remote-to-remote copy is desired.
This version uses socat instead of netcat, which implies that socat must be installed on the involved remote machines, as well as on the local box. It also implies that traffic is allowed between the local box and the remote ones on the remote TCP port used (in this example, 1234).

#!/bin/bash

# nettar_gen.sh
# copy directory trees between local/remote and local/remote, using tar + socat
# usage: $0 src dst
# if either src or dst contains a colon, it's assumed to mean machine:path,
# otherwise it's assumed to be a local path
# examples:
#
# $0 remote:/src/dir /local/dst
# $0 /local/src remote:/dst/dir
# $0 remote1:/src/dir remote2:/dst/dir

# note: error checking is very rudimentary. Argument sanity checking is missing.

src=$1
dst=$2
port=1234

remotesrc=0
remotedst=0
user=root

if [[ "$src" =~ : ]]; then
  remotesrc=1
  srcmachine=${src%%:*}
  srcdir=${src#*:}
  if ! ssh "$user"@"$srcmachine" "cd '$srcdir' || exit 1; { tar -cf - . | socat - tcp-l:$port,reuseaddr ; } </dev/null >/dev/null 2>&1 &"; then
    echo "Error setting up source on $srcmachine" >&2
    exit 1
  fi
fi

if [[ "$dst" =~ : ]]; then
  remotedst=1
  dstmachine=${dst%%:*}
  dstdir=${dst#*:}
  if ! ssh "$user"@"$dstmachine" "cd '$dstdir' || exit 1; { socat tcp-l:$port,reuseaddr - | tar -xf - ; } </dev/null >/dev/null 2>&1 &"; then
    echo "Error setting up destination on $dstmachine" >&2
    exit 1
  fi
fi

# sometimes remote initialization takes a bit longer...
sleep 0.5

if [ $remotesrc -eq 0 ] && [ $remotedst -eq 0 ]; then
  # local src, local dst
  tar -cf - -C "$src" . | tar -xvf - -C "$dst"
elif [ $remotesrc -eq 0 ]; then
  # local src, remote dst
  tar -cvf - -C "$src" . | socat - tcp:"$dstmachine":$port
elif [ $remotedst -eq 0 ]; then
  # remote src, local dst
  socat tcp:"$srcmachine":$port - | tar -xvf - -C "$dst"
else
  # remote src, remote dst
  socat tcp:"$srcmachine":$port - | socat - tcp:"$dstmachine":$port
fi

So with this code we can say:

$ nettar_gen.sh remote1:/src/dir remote2:/dst/dir

and transfer the files unencrypted without the overhead of ssh (as tar runs remotely, we won't be able to see the names of the files being transferred, though). Compression can be added to tar if desired (it doesn't always make things faster, so it might or might not be an improvement).

Real rsync?

The approaches so far (except the first one, which however has other drawbacks) have the problem that they are not incremental, so if a transfer is interrupted, we have to restart it from the beginning (ok, we can cheat and move or delete the already-copied data on the origin so it doesn't have to be copied again, but it should be obvious that this is neither an optimal nor a desirable workaround). The tool of choice when we need to resume partial transfers is, of course, rsync but, as the man page kindly informs us, rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts). However, we can leverage ssh's port forwarding capabilities and "bring", so to speak, a "tunnel" to remote1 that connects to remote2 via the local machine, for example:

$ ssh -R10000:remote2:10000 remote1

If we do the above, anything sent to localhost:10000 on remote1 will be sent to port 10000 on remote2. In particular, we can forward to port 22 on remote2 (or whatever port ssh is using there):

$ ssh -R10000:remote2:22 remote1

Now "ssh -p 10000 localhost" on remote1 gives us a password request from remote2's ssh daemon.
So, since rsync runs over ssh, with this tunnel in place we can run this on remote1 (all the examples use root as the user on remote2; adapt as needed):

remote1$ rsync -e 'ssh -l root -p 10000' -avz /src/dir/ localhost:/dest/dir/

and we'll effectively be transferring stuff to remote2. We can run the above directly from the local box (the -t option to ssh forces a pseudo-tty allocation, otherwise we couldn't be asked for the password):

$ ssh -t -R10000:remote2:22 remote1 'rsync -e "ssh -l root -p 10000" -avz /src/dir/ localhost:/dest/dir/'
The authenticity of host '[localhost]:10000 ([127.0.0.1]:10000)' can't be established.
ED25519 key fingerprint is 9a:fd:f3:7f:55:1e:6b:44:b2:88:fd:a3:e9:c9:b9:ed.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:10000' (ED25519) to the list of known hosts.
root@localhost's password:
sending incremental file list
...

So this way we get almost what we wanted, except we're still prompted for a password (which, as should be clear by now, is really the password for root@remote2). This is expected, since remote1 probably has no relation whatsoever with remote2 (we are also asked to accept remote2's ssh host key). Although this solution is already quite satisfactory, can we do better? The answer is: yes.

An option is to set up passwordless ssh between remote1 and remote2, so we need to install the appropriate ssh keys in remote1's ~/.ssh directory (and adapt the -e option to rsync to use them, if necessary). This may or may not be ok, depending on the exact circumstances, but in any case it still requires changes on remote1, which may not be desirable (it also requires further work if some day we want to transfer, say, between remote1 and remote3, and for any new remote we want to work with). Can't we somehow exploit the fact that we already have passwordless ssh to both remotes from our local machine? The answer is again: yes.

If we use ssh-agent (or gpg-agent, which can store ssh keys as well), whose job is to read and store private ssh keys, we can then take advantage of the -A option to ssh (which can also be specified in ~/.ssh/config as ForwardAgent) to forward our agent connection to remote1; there, the keys stored by the agent will be accessible and thus usable to get passwordless login on remote2 (well, on [localhost]:10000 actually, which is how remote1 will see it). Simply put, forwarding the ssh agent means that all ssh authentication challenges will be forwarded to the local machine, so in particular it is possible to take advantage of locally available keys even for authentications happening remotely. Here is a very good description of the process. (And be sure to read and understand the implications of using -A as explained in the man page.) With a running agent with knowledge of the relevant key on the local machine, and agent forwarding, we can finally have a seamless remote-to-remote rsync:

$ ssh -t -A -R10000:remote2:22 remote1 'rsync -e "ssh -l root -p 10000" -avz /src/dir/ localhost:/dest/dir/'

An annoyance with this approach is that, since remote1 stores the host key of [localhost]:10000 in its ~/.ssh/known_hosts file, if we do this:

$ ssh -t -A -R10000:remote2:22 remote1 'rsync -e "ssh -l root -p 10000" -avz /src/dir/ localhost:/dest/dir/'

and then this:

$ ssh -t -A -R10000:remote3:22 remote1 'rsync -e "ssh -l root -p 10000" -avz /src/dir/ localhost:/dest/dir/'

ssh will complain loudly and rightly that the key for localhost:10000 has changed.
A workaround, if this kind of operation is needed frequently, is to set up some sort of mapping between remote hosts and ports used on remote1 (and stick to it). A slightly better method could be to clean up the relevant entry from remote1's ~/.ssh/known_hosts file just before starting the transfer (eg with sed or some other tool), and then use StrictHostKeyChecking=no to have the key automatically added without confirmation, for example:

# cleanup, then do the copy
$ ssh remote1 'sed -i "/^\[localhost\]:10000 /d" .ssh/known_hosts'
$ ssh -t -A -R10000:remote2:22 remote1 'rsync -e "ssh -l root -p 10000 -o StrictHostKeyChecking=no" -avz /src/dir/ localhost:/dest/dir/'
Warning: Permanently added '[localhost]:10000' (ED25519) to the list of known hosts.
sending incremental file list
...

Update 13/02/2015: it turns out that ssh-keygen, despite its name, has an option (-R) to remove a host key from the known_hosts file, so it can be used instead of sed in the above example:

# ssh-keygen -R '[localhost]:10000'
# Host [localhost]:10000 found: line 21 type ED25519
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old

However, it leaves behind a file with the .old suffix, and outputs a message which can't be suppressed with -q, despite what the man page says, so one would need to resort to shell redirection if silent operation is wanted.

Filed under networking, shell, tips, worksforme. Tagged remote copy, rsync, scp, socat, ssh. Comment

The mythical "idempotent" file editing

Posted by waldner on 10 January 2015, 11:27 am

The story goes more or less like this: "I want to edit a file by adding some lines, but leaving alone any other lines that it might already have. If one of the to-be-added lines is already present, do not re-add it (or replace the existing one). I should be able to repeat this process an arbitrary number of times; after the first run, any subsequent run must leave the file unchanged" (hence "idempotent"). For some reason, a typical target for this kind of thing seems to be the file /etc/hosts, and that's what we'll be using here for the examples. Adapt as needed. Other common targets include /etc/passwd or DNS zone files.

Note that there are almost always ways to avoid doing what we're going to do. A typical scenario cited by proponents of this approach is the automated or scripted install of a machine where a known state for /etc/hosts is desired. But in that case, one can just create the file from scratch with appropriate contents (we are provisioning, right?). Creating the file from scratch certainly leaves it with the desired contents, and is surely idempotent (it can be repeated as many times as wanted). Another scenario is managing/maintaining such a file on an already installed machine. But if you really need to do that, there are tools (puppet has a /etc/hosts type, augeas can edit most common file types, etc.) that can do it natively and well (well, at least most likely better than a script). So in the end it's almost always a half-baked attempt at doing something that either shouldn't be necessary in the first place, or should be done with the appropriate tools. Nevertheless, there seem to be a lot of people trying to do this, so for the sake of it, let's see how the task could be approached.
To make it concrete, here's our existing (pre-edit) /etc/hosts:

#
# /etc/hosts: static lookup table for host names
#

127.0.0.1      my.example.com localhost.localdomain my localhost
::1            localhost.localdomain localhost
192.168.44.12  server1.example.com server1
192.168.44.1   firewall.example.com firewall

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

2001:db8:a:b::1 server6.example.com server6

# end of file

We want to merge the following lines (we assume they are stored in their own file, newlines.txt):

192.168.44.250 newserver.example.com newserver
192.168.44.1 firewall.example.com firewall gateway
2001:db8:a:b::100 server7.example.com server7

When one of the lines we're adding is already present in the target file, there are two possible policies: either leave the line alone (ie, the old line is the good one), or replace it (ie, the new line is the good one). In our example, we would encounter this issue with the 192.168.44.1 entry. Of course, it's not hard to imagine situations in which, for just some of the new lines, the "new line wins" policy should be used, while still using the "old line wins" policy for the remaining ones. We choose to ignore this problem here and use a global policy, but it's certainly not just a theoretical case.

Another issue has to do with the method used to detect whether a line is already present: do we compare the whole line, just a key field (somehow calculated, for example a column), a set of fields, or yet something else? If we use more than one field, what about spaces? In the case of /etc/hosts it seems sensible to use the first column (ie, the actual IP address) as a key, but it could be argued that the second field (the FQDN) should be used instead, as we want to ensure that a given FQDN is resolvable, no matter to which IP address (this in turn has the problem that then we can't add an IPv4 and an IPv6 line for the same FQDN). Here we're using the first field; again, adaptation will be necessary for different needs.

Another, more serious issue has to do with the overall format of the resulting file. What do we do with comments and empty lines? In this case, we just print them verbatim. And what about internal file "semantics" (for lack of a better term)? Let's say we like to have all IPv4 addresses nicely grouped together, and all IPv6 addresses as well. New lines should respect the grouping (an IPv4 line should go into the IPv4 group, etc.). Now things start to be, well, "interesting". Since where a line appears in the file doesn't really matter much to the resolver routines, here we choose to just append new lines at the end; but this is a very simple (and, for some "idempotent" editing fans, probably unsatisfactory) policy.

The point is: it's easy to see how this seemingly easy task can quickly become arbitrarily (and ridiculously) complicated, and any "quick and dirty" solution necessarily has to deal with many assumptions and tradeoffs. (And all this just for the relatively simple file /etc/hosts. Imagine managing a DNS zone file, or a DHCP server configuration file with MAC to IP mappings, just to name some other examples. And we're still in the domain of single-line-at-a-time changes.) So here's some awk code that tries to do the merge.
Whether the "existing/old line wins" policy or the "new line wins" policy is used is controlled with a flag (newwins) that can be set with -v, and by default is set to 0 (old line wins):

BEGIN {
  # awk way to check whether a variable is not defined
  if (newwins == "" && newwins == 0) {
    newwins = 0    # by default, old line wins
  }
}

# load new lines, skip empty/comment lines
NR == FNR {
  if (!/^[[:blank:]]*(#|$)/) {
    ip = substr($0, 1, index($0, " ") - 1)
    newlines[ip] = $0
  }
  next
}

# print comments and empty lines verbatim
/^[[:blank:]]*(#|$)/ {
  print
  next
}

$1 in newlines {
  print (newwins == 1) ? newlines[$1] : $0
  # either way, forget it
  delete newlines[$1]
  next
}

{ print }

# if anything is left in newlines, they must be truly new lines
END {
  for (ip in newlines)
    print newlines[ip]
}

So we can run it as follows ("old line wins" policy; only two new lines appended at the end):

$ awk -f mergehosts.awk newlines.txt /etc/hosts
#
# /etc/hosts: static lookup table for host names
#

127.0.0.1      my.example.com localhost.localdomain my localhost
::1            localhost.localdomain localhost
192.168.44.12  server1.example.com server1
192.168.44.1   firewall.example.com firewall

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

2001:db8:a:b::1 server6.example.com server6

# end of file
2001:db8:a:b::100 server7.example.com server7
192.168.44.250 newserver.example.com newserver

or with the "new line wins" policy (same two lines appended, and an existing one replaced with the new version):

$ awk -f mergehosts.awk -v newwins=1 newlines.txt /etc/hosts
#
# /etc/hosts: static lookup table for host names
#

127.0.0.1      my.example.com localhost.localdomain my localhost
::1            localhost.localdomain localhost
192.168.44.12  server1.example.com server1
192.168.44.1   firewall.example.com firewall gateway

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

2001:db8:a:b::1 server6.example.com server6

# end of file
2001:db8:a:b::100 server7.example.com server7
192.168.44.250 newserver.example.com newserver

(To actually change the original file, redirect the output to a temporary file and use it to overwrite the original one. Let's not start that discussion again.)

Not looking good? Well, it's kind of expected, since it's an ugly hack. It does work under the assumptions, but it's nonetheless a hack. As said, it's highly dependent on the use case, but in general a better solution to this kind of problem is to either generate the whole file from scratch every time (including from templates, if appropriate), or use dedicated tools to manage it.

It can also be mentioned that, if one really must do it with a script, it's often possible and easy enough to divide the target file into "zones" (for example, using special comment markers). In this way, within the same file, one zone could be deemed "safe" and reserved for hand-created content that should be preserved, and another zone for automated content (that is, erased and recreated from scratch each time). However, this approach assumes that the whole of the automated content is supplied each time. This approach (slightly less hackish) introduces its own set of considerations, and is interesting enough to deserve an article on its own.
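As a taste of it, here's a minimal sketch of that zone-based approach (the marker strings and the auto.hosts input file are hypothetical; the whole automated zone is erased and regenerated on every run, which is what makes it idempotent):

#!/bin/bash
# minimal sketch of the "zones" approach for /etc/hosts
# (hypothetical markers and file names)
set -e
target=/etc/hosts

tmp=$(mktemp)
# keep the hand-maintained part (everything outside the markers)
sed '/^# BEGIN AUTOMATED ZONE$/,/^# END AUTOMATED ZONE$/d' "$target" > "$tmp"
# append the freshly generated zone
{
  echo '# BEGIN AUTOMATED ZONE'
  cat auto.hosts
  echo '# END AUTOMATED ZONE'
} >> "$tmp"
cat "$tmp" > "$target"   # overwrite in place to preserve ownership/permissions
rm -f "$tmp"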
Filed under awk, faq, tips, worksforme. Tagged configuration management, editing, idempotency, kludges. 1 comment

© 2009-2018 Davide Brini - https://backreference.org

URL analysis for backreference.org


https://backreference.org/2015/02/09/remote-to-remote-data-copy/#respond
https://backreference.org/2012/03/
https://backreference.org/tag/perl/
https://backreference.org/2010/01/
https://backreference.org/tag/csv/
https://backreference.org/2014/05/
https://backreference.org/tag/escaping/
https://backreference.org/2011/01/
https://backreference.org/2014/11/
https://backreference.org/tag/monitoring/
https://backreference.org/tag/remote-copy/
https://backreference.org/2015/03/
https://backreference.org/tag/fun/
https://backreference.org/2012/04/
https://backreference.org/2011/09/

Whois Information


Whois is a protocol that provides access to registration information. With it you can find out when the website was registered, when it will expire, and what the contact details of the site are. In a nutshell, the record includes the following information:
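The raw record can be fetched with the standard whois client; a minimal sketch (the report below was obtained via org.whois-servers.net on port 43, so querying that server directly should return equivalent data):

$ whois -h org.whois-servers.net backreference.org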

Domain Name: BACKREFERENCE.ORG
Registry Domain ID: D157517301-LROR
Registrar WHOIS Server:
Registrar URL: http://www.tucows.com
Updated Date: 2016-10-06T21:44:08Z
Creation Date: 2009-11-04T23:26:02Z
Registry Expiry Date: 2021-11-04T23:26:02Z
Registrar Registration Expiration Date:
Registrar: Tucows Inc.
Registrar IANA ID: 69
Registrar Abuse Contact Email:
Registrar Abuse Contact Phone:
Reseller:
Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
Registry Registrant ID: C114191618-LROR
Registrant Name: Contact Privacy Inc. Customer 0122201446
Registrant Organization: Contact Privacy Inc. Customer 0122201446
Registrant Street: 96 Mowat Ave
Registrant City: Toronto
Registrant State/Province: ON
Registrant Postal Code: M6K3M1
Registrant Country: CA
Registrant Phone: +1.4165385457
Registrant Phone Ext:
Registrant Fax:
Registrant Fax Ext:
Registrant Email: [email protected]
Registry Admin ID: C114191618-LROR
Admin Name: Contact Privacy Inc. Customer 0122201446
Admin Organization: Contact Privacy Inc. Customer 0122201446
Admin Street: 96 Mowat Ave
Admin City: Toronto
Admin State/Province: ON
Admin Postal Code: M6K3M1
Admin Country: CA
Admin Phone: +1.4165385457
Admin Phone Ext:
Admin Fax:
Admin Fax Ext:
Admin Email: [email protected]
Registry Tech ID: C114191618-LROR
Tech Name: Contact Privacy Inc. Customer 0122201446
Tech Organization: Contact Privacy Inc. Customer 0122201446
Tech Street: 96 Mowat Ave
Tech City: Toronto
Tech State/Province: ON
Tech Postal Code: M6K3M1
Tech Country: CA
Tech Phone: +1.4165385457
Tech Phone Ext:
Tech Fax:
Tech Fax Ext:
Tech Email: [email protected]
Name Server: NS2.HE.NET
Name Server: NS3.HE.NET
Name Server: NS4.HE.NET
Name Server: NS5.HE.NET
DNSSEC: unsigned
URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of WHOIS database: 2017-08-25T15:48:05Z <<<

For more information on Whois status codes, please visit https://icann.org/epp

Access to Public Interest Registry WHOIS information is provided to assist persons in determining the contents of a domain name registration record in the Public Interest Registry registry database. The data in this record is provided by Public Interest Registry for informational purposes only, and Public Interest Registry does not guarantee its accuracy. This service is intended only for query-based access. You agree that you will use this data only for lawful purposes and that, under no circumstances will you use this data to: (a) allow, enable, or otherwise support the transmission by e-mail, telephone, or facsimile of mass unsolicited, commercial advertising or solicitations to entities other than the data recipient's own existing customers; or (b) enable high volume, automated, electronic processes that send queries or data to the systems of Registry Operator, a Registrar, or Afilias except as reasonably necessary to register domain names or modify existing registrations. All rights reserved. Public Interest Registry reserves the right to modify these terms at any time. By submitting this query, you agree to abide by this policy.

  REFERRER http://www.pir.org/

  REGISTRAR Public Interest Registry

SERVERS

  SERVER org.whois-servers.net

  ARGS backreference.org

  PORT 43

  TYPE domain

DOMAIN

  NAME backreference.org

  HANDLE D157517301-LROR

  CREATED 2009-04-11

STATUS
clientTransferProhibited https://icann.org/epp#clientTransferProhibited
clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited

NSERVER

  NS2.HE.NET 216.218.131.2

  NS3.HE.NET 216.218.132.2

  NS4.HE.NET 216.66.1.2

  NS5.HE.NET 216.66.80.18

OWNER

  HANDLE C114191618-LROR

  NAME Contact Privacy Inc. Customer 0122201446

  ORGANIZATION Contact Privacy Inc. Customer 0122201446

ADDRESS

STREET
96 Mowat Ave

  CITY Toronto

  STATE ON

  PCODE M6K3M1

  COUNTRY CA

  PHONE +1.4165385457

  EMAIL [email protected]

ADMIN

  HANDLE C114191618-LROR

  NAME Contact Privacy Inc. Customer 0122201446

  ORGANIZATION Contact Privacy Inc. Customer 0122201446

ADDRESS

STREET
96 Mowat Ave

  CITY Toronto

  STATE ON

  PCODE M6K3M1

  COUNTRY CA

  PHONE +1.4165385457

  EMAIL [email protected]

TECH

  HANDLE C114191618-LROR

  NAME Contact Privacy Inc. Customer 0122201446

  ORGANIZATION Contact Privacy Inc. Customer 0122201446

ADDRESS

STREET
96 Mowat Ave

  CITY Toronto

  STATE ON

  PCODE M6K3M1

  COUNTRY CA

  PHONE +1.4165385457

  EMAIL [email protected]

  REGISTERED yes


Mistakes


The following list shows possible spelling mistakes internet users might make when searching for this website.

  • www.ubackreference.com
  • www.7backreference.com
  • www.hbackreference.com
  • www.kbackreference.com
  • www.jbackreference.com
  • www.ibackreference.com
  • www.8backreference.com
  • www.ybackreference.com
  • www.backreferenceebc.com
  • www.backreferenceebc.com
  • www.backreference3bc.com
  • www.backreferencewbc.com
  • www.backreferencesbc.com
  • www.backreference#bc.com
  • www.backreferencedbc.com
  • www.backreferencefbc.com
  • www.backreference&bc.com
  • www.backreferencerbc.com
  • www.backreference4bc.com
  • www.backreferencec.com
  • www.backreferencebc.com
  • www.backreferencevc.com
  • www.backreferencevbc.com
  • www.backreferencevc.com
  • www.backreference c.com
  • www.backreference bc.com
  • www.backreference c.com
  • www.backreferencegc.com
  • www.backreferencegbc.com
  • www.backreferencegc.com
  • www.backreferencejc.com
  • www.backreferencejbc.com
  • www.backreferencejc.com
  • www.backreferencenc.com
  • www.backreferencenbc.com
  • www.backreferencenc.com
  • www.backreferencehc.com
  • www.backreferencehbc.com
  • www.backreferencehc.com
  • www.backreference.com
  • www.backreferencec.com
  • www.backreferencex.com
  • www.backreferencexc.com
  • www.backreferencex.com
  • www.backreferencef.com
  • www.backreferencefc.com
  • www.backreferencef.com
  • www.backreferencev.com
  • www.backreferencevc.com
  • www.backreferencev.com
  • www.backreferenced.com
  • www.backreferencedc.com
  • www.backreferenced.com
  • www.backreferencecb.com
  • www.backreferencecom
  • www.backreference..com
  • www.backreference/com
  • www.backreference/.com
  • www.backreference./com
  • www.backreferencencom
  • www.backreferencen.com
  • www.backreference.ncom
  • www.backreference;com
  • www.backreference;.com
  • www.backreference.;com
  • www.backreferencelcom
  • www.backreferencel.com
  • www.backreference.lcom
  • www.backreference com
  • www.backreference .com
  • www.backreference. com
  • www.backreference,com
  • www.backreference,.com
  • www.backreference.,com
  • www.backreferencemcom
  • www.backreferencem.com
  • www.backreference.mcom
  • www.backreference.ccom
  • www.backreference.om
  • www.backreference.ccom
  • www.backreference.xom
  • www.backreference.xcom
  • www.backreference.cxom
  • www.backreference.fom
  • www.backreference.fcom
  • www.backreference.cfom
  • www.backreference.vom
  • www.backreference.vcom
  • www.backreference.cvom
  • www.backreference.dom
  • www.backreference.dcom
  • www.backreference.cdom
  • www.backreferencec.om
  • www.backreference.cm
  • www.backreference.coom
  • www.backreference.cpm
  • www.backreference.cpom
  • www.backreference.copm
  • www.backreference.cim
  • www.backreference.ciom
  • www.backreference.coim
  • www.backreference.ckm
  • www.backreference.ckom
  • www.backreference.cokm
  • www.backreference.clm
  • www.backreference.clom
  • www.backreference.colm
  • www.backreference.c0m
  • www.backreference.c0om
  • www.backreference.co0m
  • www.backreference.c:m
  • www.backreference.c:om
  • www.backreference.co:m
  • www.backreference.c9m
  • www.backreference.c9om
  • www.backreference.co9m
  • www.backreference.ocm
  • www.backreference.co
  • backreference.orgm
  • www.backreference.con
  • www.backreference.conm
  • backreference.orgn
  • www.backreference.col
  • www.backreference.colm
  • backreference.orgl
  • www.backreference.co
  • www.backreference.co m
  • backreference.org
  • www.backreference.cok
  • www.backreference.cokm
  • backreference.orgk
  • www.backreference.co,
  • www.backreference.co,m
  • backreference.org,
  • www.backreference.coj
  • www.backreference.cojm
  • backreference.orgj
  • www.backreference.cmo