AWS S3 File Upload Problems


I have been trying my hardest to figure out how to get AWS S3 file uploads to work, and at this point I am not sure what the issue is. On the S3 side, everything works if I make the bucket completely open to the public, which is a bad idea. When I put it behind an ACL and provided those credentials to the Leantime instance, no uploads or reads work.

I am using the latest Leantime in a Docker container. I followed along in another thread (Issue uploading files) but have pretty much become stuck. When I disable the redirect in the ticket file upload template and add var_dump calls, I see the dumps, but when I try to dump the exception, I get the following error in my resources error log:

[04-May-2020 17:52:13 America/Los_Angeles] PHP Fatal error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 534790144 bytes) in /var/www/html/src/core/class.fileupload.php on line 553
[04-May-2020 17:52:14 America/Los_Angeles] PHP Fatal error: Allowed memory size of 1073741824 bytes exhausted (tried to allocate 534777856 bytes) in Unknown on line 0

The file is a 60 KB PNG, so obviously something is continuously attempting to allocate memory. The other interesting bit is that there aren't 553 lines of code in the file upload class, so I am not sure what is happening there (I am not primarily a PHP developer).

Does anyone have any ideas? Was anyone able to get S3-backed file storage up and running? Any help would be appreciated. I will keep poking at it, but it's the one remaining issue preventing me from rolling this out as a replacement for JIRA and Asana at my company.

Going to break away, but I confirmed the file flows through just fine and ends up dying in “uploadToS3” (I just noticed the spelling error). I threw this in right inside the try block:

echo '<pre>';
var_dump($this->file_tmp_name);
var_dump(stat($this->file_tmp_name));
echo '</pre>';
return;
$file = fopen($this->file_tmp_name, "rb");

Notice the return before the fopen. If I have the return, I get a dump. The file exists, with the right size, everything looks fine. If I remove the return, I get no dump and a memory exhaustion error in the error logs. I really am at a loss on this and would love some guidance. I am sure I am doing something silly.

Hi & welcome to the forum!

One thing to note is that the other thread is referring to local file uploads. There are different methods for the upload to S3.
On the S3 permissions: the bucket does not need to be public, but the IAM user should have sufficient permissions to put objects as well as object ACLs (s3:PutObject and s3:PutObjectAcl). Try giving that user AmazonS3FullAccess for that bucket.

Given that you are able to upload files to S3 with a public bucket, it is highly likely that there is an issue with that user's permissions.
Make sure that you don't rely on just the “Block Public Access” toggles, since those only govern public access, not access for a specific user. Instead, create a user with explicit S3 access policies.
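To make the “explicit S3 access policies” part concrete, here is a minimal sketch of an IAM policy you could attach to the user whose keys you gave Leantime. The bucket name is a placeholder; the exact action list you need may vary, this is just an illustration of scoping object and bucket permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LeantimeUploads",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    }
  ]
}
```

Note that the object-level actions (PutObject, GetObject, etc.) match the `/*` resource, while ListBucket matches the bucket ARN itself.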


That's a good point. If I turn off the default “Block all public access” settings, it doesn't work, but if I turn them on, it works. I guess I assumed that if I didn't leave them checked, all files would have public access, so I need to dig in more on the AWS side. When I cleared out my logging in the Docker container and rebuilt, the files get uploaded, so that's good. Still weird that it was exhausting a 1 GB memory limit, but it doesn't do that without the dumps, so it must be something no user would ever actually hit. Thanks for your help!

The memory exhaustion is weird indeed. I am taking the action item to add more logging and better exception handling to the entire file upload stack.

On the S3 side: you can see on line 272 of /src/core/class.fileupload.php that each uploaded file is set to “authenticated-read”, meaning only authenticated users (and bucket owners) can read those files.
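For reference, this is roughly what such an upload looks like with the AWS SDK for PHP; a sketch, not the exact Leantime code, with the region, bucket, key, and file path as placeholders:

```php
<?php
// Sketch of an S3 upload with the "authenticated-read" canned ACL,
// assuming the AWS SDK for PHP is installed via Composer and that
// credentials come from the environment.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',            // placeholder region
]);

$result = $s3->putObject([
    'Bucket' => 'your-leantime-bucket',  // placeholder bucket name
    'Key'    => 'uploads/example.png',   // placeholder object key
    'Body'   => fopen('/tmp/example.png', 'rb'), // stream the file
    'ACL'    => 'authenticated-read',    // the ACL Leantime uses
]);
```

Passing a stream as `Body` (rather than the result of `file_get_contents`) lets the SDK send the file without buffering it all in PHP memory, which is also the kind of thing to look at when chasing memory-exhaustion errors in an upload path.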

A good way to check is to upload a file (using Leantime) and then navigate to the bucket in the AWS console. Open the file and click the “Object URL” link; that is the public URL of an object in S3. You shouldn't be able to see the content using that link. The only way to access the file via AWS is to click the “Open” or “Download” button at the top right, which performs the authentication step and gives you access to the file.
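The same check can be sketched from the command line; bucket and key are placeholders, and this assumes your AWS CLI is configured with the bucket owner's credentials:

```shell
# List the ACL grants on the uploaded object. With "authenticated-read"
# you should see the AuthenticatedUsers group with READ, not AllUsers.
aws s3api get-object-acl \
  --bucket your-leantime-bucket \
  --key uploads/example.png

# An anonymous request to the Object URL should be denied (HTTP 403).
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://your-leantime-bucket.s3.amazonaws.com/uploads/example.png"
```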

Yeah, I noticed that, and indeed that is how the files are being uploaded. Like I said, I haven't done a ton of work with S3 myself, so I'm a bit new in this area. I did discover what I think is a bug, which I will file over on GitHub: when I delete a file, such as from a ticket, or change a profile image, the old file remains on S3. I don't have time to dive into the code tonight to see if I can find the issue, but after deleting all of the files from my tickets, they stay up on S3. Unless there is a way to clean out old files?

Please do file a bug and I will take a look.

Hi Kevin

That may be a silly question, but did you check your php.ini file?

file_uploads = On
upload_max_filesize = 128M
post_max_size = 130M

For example, check whether you wrote MB instead of M.
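On the same theme, the fatal errors above show the process exhausting a 1 GB memory_limit (1073741824 bytes), so that directive is worth checking alongside the upload settings. A sketch of the relevant php.ini lines, with example values only:

```ini
file_uploads = On
upload_max_filesize = 128M   ; use the "M" shorthand, not "MB"
post_max_size = 130M         ; should be >= upload_max_filesize
memory_limit = 1G            ; the value implied by the error above
```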

Good luck!