
After using Git for version control purely on my remote server, I am now looking to use it for version control across both my remote and local file systems.

My approach to doing this so far (consolidated in the sketch after this list) is to:

  1. Create a remote bare repo as a 'save' directory:
  • Create the directory: `mkdir /save`
  • Create a save repo for this project: `mkdir /save/projectName`
  • Enter the project repo (`cd /save/projectName`) and initialise it as bare: `git init --bare`
  2. Clone the remote save repo locally; add, edit and commit; then push back to the remote save:
  • Create a local development directory and enter it: `mkdir /webDev` then `cd /webDev`
  • Clone the remote save of this project: `git clone user@host:/save/projectName`
  • Add files and stage them with `git add *`, edit them, then commit: `git commit * -m "Update."`
  • Push changes to the remote save repo: `git push origin master`
  3. Clone the updated save repo into a development repo on the remote machine:
  • Enter the server directory `/srv`, then the development sub-directory `/srv/dev`
  • Clone the remote save repo: `git clone /save/projectName`
  4. Check the development site works as expected, then repeat (3) for the production directory:
  • Run `git clone /save/projectName` in the production directory `/srv`
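
For concreteness, the whole round trip looks roughly like this as a shell sketch (`user@host` and `projectName` are placeholders; `/save`, `/webDev` and `/srv` are the paths above):

```
# --- on the remote server: create the bare 'save' repo (step 1) ---
mkdir -p /save/projectName
cd /save/projectName
git init --bare

# --- on the local machine: clone, work, push (step 2) ---
mkdir -p /webDev && cd /webDev
git clone user@host:/save/projectName
cd projectName
# ...create and edit files...
git add .
git commit -m "Update."
git push origin master

# --- back on the remote server: deploy (steps 3 and 4) ---
cd /srv/dev && git clone /save/projectName   # first time only; `git pull` inside the clone thereafter
# test the dev site, then:
cd /srv && git clone /save/projectName
```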

This all works fine; however, my concern is the disk space taken up by having three directories with the same contents, which, across multiple projects, grows as 3N for N projects.
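
For what it's worth, the overhead is easy to measure; this sketch compares a clone's total size with the Git data inside it (paths are the ones above):

```
# total size of a full clone (working tree + history)
du -sh /srv/dev/projectName

# size of just the repository data inside it
du -sh /srv/dev/projectName/.git

# Git's own accounting of its object storage
git -C /srv/dev/projectName count-objects -vH
```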

I've read many online tutorials and sites about using Git, but I haven't been able to follow any of them clearly. There is often talk of working with branches, but I don't want to think about branches yet: just cloning, pushing and pulling.

Ideally, I would like to host the bare 'save' repo on my local machine, which has a dynamic IP, and then somehow copy its contents to the remote machine's development and production directories. This would reduce three directories per project to two, which would be better, but I haven't found a convenient way to `git clone` from a dynamic IP address.
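
One common workaround, sketched below, is a dynamic-DNS hostname plus an SSH alias, assuming the local machine runs an SSH server reachable from the remote one (the names `myhome.example.net`, `homebox` and `localUser` are hypothetical):

```
# ~/.ssh/config on the remote machine
Host homebox
    HostName myhome.example.net   # dynamic-DNS name that tracks the local machine's changing IP
    User localUser
```

The remote machine could then run `git clone homebox:/save/projectName` (and later `git pull`) without ever needing to know the current IP.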

In summary, there are a few questions I can think of that would address the issue I have:

  1. Do cloned Git repositories occupy the same disk space as their raw file equivalents? Or does Git somehow store them more compactly?

  2. Is there an industry-standard way of setting up the local, remote dev and remote prod locations that gets around the disk-space issue?

  3. Is there a means of hosting the bare Git repo on my local machine, and then somehow moving its contents to the remote dev and prod locations?

Any direction on the above questions, or corrections to any misconceptions I may have, along with an explanation, would be appreciated.

  • "Do cloned Git repositories occupy the same disk space as their raw file equivalents? Or does Git somehow store them more compactly?" Git repositories occupy more space than the file hierarchy they represent, in part because every committed state of every file takes up space, and in part because there's more to a Git repo than file contents. But really, if you're worried about that, you're doing something wrong. Git repos are not generally big (and many public hosting sites will reject large repos or repos containing large blobs).
    – matt (Sep 8, 2023 at 16:12)
  • "Is there an industry standard way of setting up the process of local, remote dev and remote prod locations, that gets around the memory issue ?" Local and remote are two different places, so there is nothing to standardize. Dev vs prod is usually a branch distinction.
    – matt
    Commented Sep 8, 2023 at 16:13
  • "Is there a means of hosting the bare git repo on my local machine, and then somehow moving them to the remote dev and prod locations" Sure, if that's what you want to do.
    – matt
    Commented Sep 8, 2023 at 16:14
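
To make the comments concrete: a common way to avoid keeping full clones in `/srv/dev` and `/srv` is a `post-receive` hook in the bare repo that checks the pushed branch out into plain work trees. This is a minimal sketch assuming the bare repo stays at `/save/projectName` on the server and that `/srv/dev/projectName` already exists:

```
#!/bin/sh
# /save/projectName/hooks/post-receive -- runs after every push into the bare repo.
# Make it executable with: chmod +x hooks/post-receive
while read oldrev newrev refname
do
    if [ "$refname" = "refs/heads/master" ]; then
        # deploy the pushed master into the dev tree (a plain checkout, no .git directory)
        git --work-tree=/srv/dev/projectName --git-dir=/save/projectName checkout -f master
        # once dev looks good, the same one-liner against /srv/projectName promotes to prod
    fi
done
```

Because these work trees carry no `.git` directory of their own, the per-project footprint drops from three full repositories to one repository plus plain checked-out files; and, echoing matt's second comment, gating dev vs. prod on two branches in the same hook would be the natural next step once branches are on the table.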
