It _shouldn't_ mess up anything existing, I don't think. It's been a few months since I did much with JS so I might be forgetting something obvious here, but if anyone already has Aphlict up and running, I'm pretty sure their existing install will be unaffected by the presence/absence of package-lock.json. If they want to manually update their npm packages, then they might need the additional steps, but I'm pretty sure it won't be disruptive outside of that.
Jun 18 2021
I was thinking about having it version-controlled, and I do think that would be a good idea at some point. If we do that now, I think it might mess up installations which happen to be running different versions of ws, or the upgrade path would require some additional steps. I think it would be something like:
- Run npm uninstall ws
- Delete package-lock.json
- Upgrade
- Run npm ci, which should follow the package-lock.json definitions (rough sketch below)
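A rough sketch of that upgrade path, assuming ws was installed per the current docs and the upgrade itself is a git pull (exact commands will vary per install):

```
cd support/aphlict/server/
npm uninstall ws       # remove the manually installed ws package
rm package-lock.json   # discard the locally generated lockfile
git pull               # upgrade, pulling in the version-controlled lockfile
npm ci                 # clean install pinned to package-lock.json
```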
Do we actually want to be version controlling this? That's the recommended approach for Node projects, and given how hilariously awful dependency management is with npm, it might simplify support if we say "Aphlict runs with this specific version of websockets and its dependencies".
Refs the discussion here: T15011#386 and Aphlict
Update comment
Ah, I wasn't aware of that option. I created D25004: Update .gitignore to account for package-lock.json in case we want to update the .gitignore.
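If we keep ignoring it, the entry would presumably be something along these lines (a sketch; the actual change is whatever D25004 lands):

```
# .gitignore
support/aphlict/server/package-lock.json
```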
In T15011#390, @speck wrote:
The documentation for installing Aphlict instructs you to npm install ws in the support/aphlict/server/ folder, but it looks like, since that documentation was written, newer versions of node/npm have started writing out package-lock.json, which the repository is not set up to ignore. We'll need to add that file to the .gitignore file, I think.
This and D25001: T15006: Update .arcconfig to point to we.phorge.it are duplicates. I tried to land it this morning but ran into issues with the land process that I didn't have time to work out
In T15011#386, @Ekubischta wrote: A few things, @willson556
- Untracked file in phorge source support/aphlict/server/package-lock.json
A few things, @willson556
In T15011#370, @willson556 wrote: I actually started on a VSCode Devcontainer based solution on my GitHub: https://github.com/willson556/phorge-devcontainer
It is working pretty well, with notifications and repository hosting both configured out of the box. My only concern with the config at the moment is that it's very much set up for development -- we would want to clearly document that it is not to be used as a starting point for a production docker-compose setup!
Any feedback would be appreciated!
I actually started on a VSCode Devcontainer based solution on my GitHub: https://github.com/willson556/phorge-devcontainer
In T15011#363, @speck wrote: We should consider a Vagrantfile in place of docker containers. I think it will be more approachable for newcomers to have a single VM with all the services/configurations set up, compared to managing multiple containers.
Separately, developing on Windows has its own complications
Something funky in how the repo was originally imported was causing the issues. Somehow it got to a state where it wasn't properly a bare repo (there was no working tree, but everything was still inside .git/ instead of the root folder). Not sure how that happened, but it seems to be resolved now.
I have a plan for a single docker container for developing extensions.
It's really a question of "what will people like", so maybe throw everything at the wall and see what sticks.
We should consider a Vagrantfile in place of docker containers. I think it will be more approachable for newcomers to have a single VM with all the services/configurations set up, compared to managing multiple containers.
I think the plan for this is going to be:
- Try to address all external-facing "Phabricator"s
- Submit this patch upstream on secure.phabricator.com
- Phorge pulls in this change from upstream
- we.phorge.it works fine in Chrome, but arc has some issues w.r.t. CURLE_SSL_CACERT; I expect it might solve itself after a restart/update of my local machine.
- git fetch from the new uri shows no errors
- the push dragon still thinks rP51cb7a3db9 to 2abd75c162 is not a fast-forward (see the check sketched below).
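One way to double-check the fast-forward claim locally (same abbreviated hashes as above):

```
# exits 0 and prints "fast-forward" iff 51cb7a3db9 is an ancestor of 2abd75c162
git merge-base --is-ancestor 51cb7a3db9 2abd75c162 && echo fast-forward
```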
Renaming getDefaultProjectName() to getDefaultWordmark()
Infrastructure setup is being documented in server
Okay, I think everything is set up for the migration to we.phorge.it:
- I added a port 80 configuration for we.phorge.it to nginx
- I ran certbot (with --nginx) to grab a cert for we.phorge.it
- I updated the nginx conf file to clean up the automatic modifications and set up secure.phorge.it and secure.phorge.dev to redirect to we.phorge.it (rough shape sketched below)
- I updated phabricator.base-uri to use we.phorge.it
- I updated notification.servers to use we.phorge.it
- I restarted nginx
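For reference, the redirect half of that nginx change presumably looks something like this (a sketch, not the actual conf):

```
server {
    listen 443 ssl;
    server_name secure.phorge.it secure.phorge.dev;
    # ssl_certificate/ssl_certificate_key lines omitted
    return 301 https://we.phorge.it$request_uri;
}
```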
Emails have a bunch of X-Phabricator-* headers, for configuring rules in mail clients.
- We may want to allow installs to keep it as Phabricator for compatibility
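For example, outgoing mail carries headers along these lines (values illustrative; the exact header set varies by application and isn't verified here):

```
X-Phabricator-Sent-This-Message: Yes
X-Phabricator-Mail-Tags: <differential-review-request>
```

Renaming those to X-Phorge-* would break any existing client-side filters keyed on the old names, hence the compatibility option.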
Okay I'm going to try swapping out the URL for we.phorge.it. If everything goes well everyone will need to update their URLs and clone repos. If things don't go well I'll, uh, glue it back together
Notifications are also functional. Took me a minute to remember where the "test notification" feature is located (it's in your user settings > notifications)
Whoops, commented on the wrong task, tested imagemagick in T15006#314
I'm going to get aphlict up and running before looking at changing the domain name stuff. Not having notifications is kind of a bummer.
Jun 17 2021
(I verified by starting a new ssh session over port 2222 and freshly cloning phorge after modifying diffusion.ssh-port)
The ports are switched
- Administrative port is now 2222
- VCS port is now 22 (sketch of the end state below)
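For context, the administrative sshd and the VCS sshd are separate daemons; a rough sketch of the end state (file paths assumed, not verified against the actual server):

```
# /etc/ssh/sshd_config (administrative sshd) -- assumed path
Port 2222

# tell the install which port clone URIs should advertise
./bin/config set diffusion.ssh-port 22
```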
Hah yup, we're all good in case everything catches fire. I'm around all evening and can revert changes if anything goes haywire
@chris I'm looking to make the SSH configuration change shortly, having the administrative ssh go over port 2222 and vcs go over port 22. In the event everything goes horribly wrong, does someone have physical access to this machine or some other control mechanism?
For the time being I've modified the wordmark configuration to manually upload the logo file https://secure.phorge.it/config/edit/ui.logo/
@avivey the current installation includes a commit I had made on the github fork which made minimal changes to rebranding. Ultimately I think we'll want to scrap that commit but it should have replaced the eye icon with a lovely heart.
I have some step-by-step notes in our internal instance for getting SSH going - if you get stuck let me know and I will parse them into a publicly readable format
Thanks @speck! I think we also need to update the NGINX config and the phabricator.base-uri config from secure. to we.phorge.it. That will also require updating the clone URI. Do you want to just bundle both changes at once to make things easier? Looks like @deadalnix already updated DNS, so that should be hunky dory
- move administrative SSH to port 2222
This one is going to require that everyone who currently has a cloned repo update it, correct? I'll take a look later tonight at swapping this out, as the sooner the better IMO. I'll comment here before making the change.
I created Release Process for the release process.
Maybe create the ssh.log file and chown it to git? And hope that's the only file it needs to write to?
I'm guessing from the name that it's only used by the SSH flow.
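A minimal sketch of that workaround, with the log path assumed (check where the SSH wrapper is actually configured to log):

```
# path is a guess -- substitute the configured log location
sudo touch /var/log/phabricator-ssh.log
sudo chown git /var/log/phabricator-ssh.log
```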
In T15000#289, @avivey wrote: https://secure.phabricator.com/book/phabricator/article/diffusion_hosting/
I think /var/repo should be owned by git:
The user the daemons run as. We'll call this daemon-user. This user is the only user which will interact with the repositories directly. Other accounts will sudo to this account in order to perform repository operations.
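Assuming the daemon user here is git (which matches the ownership suggestion above), a minimal sketch:

```
# give the daemon user ownership of the repository root
sudo chown -R git:git /var/repo
```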
The release strategy of Phabricator was:
- everything goes into master asap, unless it's dangerous
- once a week, master gets merged into stable
- after that, all the "dangerous" stuff lands in master
- important stuff that comes up during the week gets cherry-picked to stable (roughly as sketched below)
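In git terms, that weekly cut is roughly (a sketch using the branch names above):

```
# once a week: promote master into stable
git checkout stable
git merge master

# mid-week: land an important fix on stable
git cherry-pick <commit>
```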
Yeah, logging perms should (I think) be fixed now. I was dumb when I chowned things and forgot which system users needed which access.
I think /var/repo should be owned by git:
I think secure. had instructions about file ownership - looking...
Do we have a documented release strategy? I'm not very familiar with git, and I only have a vague sense of what Phabricator's release process was. I think it's something like:
- Accepted changes are landed into master
- Evan cherry-picks changes from master into stable to "release"
Possibly with some additional smoke-testing somewhere in all this?
I think there might be some permissions issues with the log location but I'm not sure if it's the root cause of the issue being seen here.
That one is totally my fault - 4042d24d74 is a local commit I have (updates .arcconfig). But I was trying the push from a different commit, which has 51cb7a3db9 as its (only) parent.
Same with a patch workflow against a fresh clone of the repo:
phorge (master)$ arc --config phabricator.uri=https://secure.phorge.it patch D25000
 INFO  Base commit is not in local repository; trying to fetch.
Created and checked out branch arcpatch-D25000.
git version 2.32.0.rc3 locally; 2.25.1 on the server. Both reasonably recent...
In T15000#277, @avivey wrote: Also won't let me push, because something thinks it's a non-fast-forward (it is, unless I'm drunk):