Bypassing AWS IAM: How important it is to look closely at your policies

If you deal every day with dozens of users in AWS and you like to have (or believe you have) control over them, driving them like a good flock of sheep, you will feel my pain, and I'll feel yours.

We manage multiple AWS accounts for many purposes, some with more restrictions than others: we control and deny the use of certain regions, instance types, services, etc., for security and budget control (as you probably do as well).

That being said, you become a "ninja" of AWS IAM because you have to add, remove, create, change, test and simulate simple and complex policies pretty much every day to make your flock faithfully follow its shepherd.

Dealing with users is also a great way to test the strength of your policies. I have a policy that explicitly denies a list of instance types (a blacklist on "ec2:RunInstances"). OK, it denies creating those instances, but not stopping an allowed instance, changing its instance type and starting it again. You may end up discovering that your control is not as tight as you thought.

Let me show you all the technical details and a very self-explanatory demo in this video:

What do you think? Is it expected behavior? It actually is. But I also think that the "ec2:ModifyInstanceAttribute" control should be more granular and that "instanceType" should somehow be related to "ec2:RunInstances". A limitation of AWS IAM, I guess.
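
For reference, here is a minimal sketch of the kind of blacklist statement I mean; the instance types, file name and group name are just examples, and the full policy I really use is in the blog post linked below:

# example deny statement for specific instance types (deny-instance-types.json)
cat > deny-instance-types.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyBigInstanceTypes",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
      "StringEquals": {
        "ec2:InstanceType": ["i2.2xlarge", "i2.4xlarge", "i2.8xlarge"]
      }
    }
  }]
}
EOF

# attach it as an inline policy to a group (group name is illustrative)
aws iam put-group-policy --group-name developers \
--policy-name deny-big-instance-types \
--policy-document file://deny-instance-types.json \
--profile soleng
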
In case you want to try it yourself, below are all the commands I used (you will have to change the instance id, profile and region). If you want to copy a similar IAM policy, look at my blog post How to restrict by regions and instance types in AWS with IAM:
# create an allowed instance
aws ec2 run-instances --image-id ami-c58c1dd3 \
--count 1 --instance-type t2.large --key-name sec-poc \
--security-group-ids sg-12b5376a --subnet-id subnet-11fe4e49 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# stop instance
aws ec2 stop-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# change instance type
aws ec2 modify-instance-attribute --instance-id i-0152bc219d24c5f25 \
--instance-type "{\"Value\": \"i2.2xlarge\"}" \
--profile soleng --region us-east-1

# start instance
aws ec2 start-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1

# check status
aws ec2 describe-instances --instance-ids i-0152bc219d24c5f25 \
--query 'Reservations[*].Instances[*].[InstanceType,State]' \
--profile soleng --region us-east-1

# terminate instance
aws ec2 terminate-instances --instance-ids i-0152bc219d24c5f25 \
--profile soleng --region us-east-1
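
Since IAM also lets you simulate policies, you can check what a user is effectively allowed to do without touching any real instance. A quick sketch with the policy simulator from the CLI (the user ARN is a placeholder):

# simulate both actions against the user's effective policies
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:user/some-user \
--action-names ec2:RunInstances ec2:ModifyInstanceAttribute \
--profile soleng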

Cloud Forensics: CAINE7 on AWS

If you work with AWS, you may have to perform a forensic analysis at some point. As discussed in previous articles here, there are many tasks we can achieve in the cloud.
Here is a quick guide, based on the AWS CLI, on how to install, upload and use the well-known CAINE7 distribution up in the Amazon cloud by importing it as an EC2 AMI:
  • First of all, start CAINE7.iso as a live CD in VirtualBox; 12GB of disk in VHD format will be fine (if you don't use VHD and have a VMDK instead, you can convert it with "VBoxManage clonemedium CAINE7.vmdk CAINE7.vhd --format vhd")
  • Inside CAINE:
    • Run BlockON/OFF app from Desktop icon, select your virtual hard drive and make it Writable.
    • Go to Menu / System / Administration / gParted
    • In gParted  Device / Create Partition Table… msdos
    • Create a new 10GB partition and leave the rest empty
    • Create another partition, linux-swap, with the remaining 2GB
    • Edit – Apply all operations
    • Run Systemback (installer) from the Desktop icon.
    • System Install: fill in the form with user full name: caine, system user: ec2-user, your password and hostname: caine. Then Next
    • Select the 10GB partition and set the mount point /
    • Click Next and the installation will start
  • Once the installation is finished, you can stop the virtual machine, remove the live CD, start it again and log in to the VM to do some additional steps inside your freshly installed CAINE7.
  • Update and upgrade:
    • sudo apt-get update; sudo apt-get upgrade
  • Install the AWS CLI:
    • sudo pip install awscli
  • Now we will install some dependencies needed to get access via RDP once we run CAINE in AWS, just as if it were on our local workstation.
    • sudo apt-get install xrdp curl
    • sudo sed -i s/port=-1/port=ask-1/g /etc/xrdp/xrdp.ini
    • sudo sed -i s#/\.\ \/etc\/X11\/Xsession#mate-session#g /etc/xrdp/startwm.sh
    • sudo service xrdp restart
  • Extra: install the Amazon EC2 Simple Systems Manager (SSM) agent to be able to process Run Command requests remotely and in an automated way:
    • cd /tmp
    • curl https://amazon-ssm-<region>.s3.amazonaws.com/latest/debian_386/amazon-ssm-agent.deb -o amazon-ssm-agent.deb
    • dpkg -i amazon-ssm-agent.deb
  • Now we have to upload this VM VHD file to an S3 bucket; it will be around 8GB.
    • aws s3 cp CAINE7.vhd  s3://your-forensics-tools-bucket/CAINE7.vhd
    • This will take time, depending on your bandwidth.
  • If the AWS IAM user you are using doesn't have the proper permissions, you should review and follow these prerequisites http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html (a minimal sketch of creating the required vmimport service role is shown right after the JSON file below).
  • Then we can import this virtual hard drive as an AWS AMI. First create a JSON file like the one below to use as a parameter for the import task (caine7vm.json):
[{
    "Description": "CAINE7",
    "Format": "vhd",
    "UserBucket": {
        "S3Bucket": "your-forensics-tools-bucket",
        "S3Key": "CAINE7.vhd"
    }
}]
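
The import service requires an IAM role named vmimport that VM Import/Export can assume, as described in the prerequisites link above. A minimal sketch of creating it, assuming a trust policy like the one in the AWS documentation (the access policy from that document still has to be attached afterwards):

# trust policy allowing the VM Import service to assume the role (trust-policy.json)
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "vmie.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:Externalid": "vmimport" } }
  }]
}
EOF

# create the role; attach the S3/EC2 access policy from the prerequisites document afterwards
aws iam create-role --role-name vmimport \
--assume-role-policy-document file://trust-policy.json \
--profile default
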
  • Let's perform the import:
    • aws ec2 import-image --description "CAINE7" --disk-containers file://caine7vm.json --profile default --region us-east-1
    • NOTE: you probably don’t need to specify profile or region.
  • The import task may take some minutes, depending on how big the VHD is and how busy AWS is at that moment. To check the status use this command:
    • aws ec2 describe-import-image-tasks --profile default --region us-east-1 --query 'ImportImageTasks[].[ImportTaskId,StatusMessage,Progress]'
    • or this one with your own "import-ami-XXXXX":
    • aws ec2 describe-import-image-tasks --profile default --region us-east-1 --query 'ImportImageTasks[].[ImportTaskId,StatusMessage,Progress]' --cli-input-json "{ \"ImportTaskIds\": [\"import-ami-XXXXX\"]}"
    • You will see "StatusMessage": "pending" -> "validated" -> "converting" -> "preparing to boot" -> "booted" -> "preparing ami" -> "completed"
  • Once it is completed, look for your brand new AMI id:
    • aws ec2 describe-images --owners self --profile default --region us-east-1 --filters "Name=name,Values=import-ami-XXXXX"
  • Good, we know the AMI id, so let's create a new instance inside an existing VPC and a public subnet (I use a t2.medium, which has 4GB of RAM); please use your own security group with RDP and SSH open and your own ssh key name:
    • aws ec2 run-instances --image-id ami-XXXX --count 1 --instance-type t2.medium --key-name YOURKEY --security-group-ids sg-YOURSG --subnet-id subnet-YOURPUBLIC --profile default --region us-east-1
  • Add a tag to it for better identification:
    • aws ec2 create-tags --resources i-XXXX --tags Key=Name,Value=Investigator --profile default --region us-east-1
  • At this point you can attach a public IP to the instance and get access to it.
  • First allocate a public Elastic IP:
    • aws ec2 allocate-address --domain vpc --profile default --region us-east-1
  • Then associate that new Elastic IP with our just-launched CAINE7 instance (change eipalloc-XXXX):
    • aws ec2 associate-address --instance-id i-XXXX --allocation-id eipalloc-XXXX --profile default --region us-east-1
  • Now open your favorite remote desktop application and access your CAINE7; remember you will be asked for the username and password you set when CAINE was installed in your VirtualBox VM.


  • Now you should be in!


Alfresco Tuning Shortlist

During the last few years I have seen dozens of Alfresco installations in production without any kind of tuning. That makes me think that 1) nobody cares about performance, or 2) nobody cares about documentation, or 3) both!
I know people prefer to read a blog post instead of the product's official documentation. Since Alfresco has improved our official documentation A LOT and most of the information provided below can be found there, I want to point out some tips that EVERYONE has to take into account before going live with an Alfresco environment. Remember, it's easy: Tuning = Live, No Tuning = Dead.
Tuning the Alfresco side:
  • Increase the number of concurrent connections to the DB in alfresco-global.properties

[bash]
# Number below has to be the maxThreads value + 75
db.pool.max=275
[/bash]

  • Increase the number of threads that Tomcat will use in server.xml (the Connector sections for ports 8080, 8443 and 8009, in case you use AJP)

[bash]
maxThreads="200"
[/bash]

  • Adjust the amount of memory you want to assign to Alfresco in setenv.sh or ctl.sh (which is the default one):

[bash]
export CATALINA_OPTS=" -Xmx16G -Xms16G"
[/bash]

In JAVA_OPTS, make sure you have the "-server" flag, which gives 1/3 of the memory to new objects; do not use "-XX:NewSize=" unless you know what you are doing. Solr creates many new objects and will need more than 1GB for them in production.

Also in alfresco-global.properties, disable the legacy OpenOffice subsystem in favor of JodConverter:

[bash]
ooo.enabled=false
jodconverter.enabled=true
[/bash]

Tuning the Solr side:
In solrcore.properties for both the workspace and archive SpacesStore cores:

[bash]
alfresco.batch.count=2000
solr.filterCache.size=64
solr.filterCache.initialSize=64
solr.queryResultCache.size=1024
solr.queryResultCache.initialSize=1024
solr.documentCache.size=64
solr.documentCache.initialSize=64
solr.queryResultMaxDocsCached=2000
solr.authorityCache.size=64
solr.authorityCache.initialSize=64
solr.pathCache.size=64
solr.pathCache.initialSize=64
[/bash]

In solrconfig.xml for both the workspace and archive SpacesStore cores:

[bash]
mergeFactor change it to 25
ramBufferSizeMB change it to 64
[/bash]

April/9/2015 Update! For Solr4 (Alfresco 5.x) add the following options to its JVM startup options:

[bash]
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
[/bash]

Tuning the DB side:
Max allowed connections: adjust this value to the total db.pool.max of your Alfresco instance (or instances) plus 200, and consider increasing it if the database is used for anything other than Alfresco.
  • For MySQL in my.cnf configuration file:

[bash]
innodb_buffer_pool_size = 4GB
max_connections=600
innodb_log_buffer_size=50331648
innodb_log_file_size=31457280
innodb_flush_neighbors=0
[/bash]

  • For Postgres in postgresql.conf configuration file

[bash]
max_connections = 600
[/bash]

Do maintenance on your DB often: run ANALYZE (MySQL) or VACUUM (PostgreSQL); a DB also needs love!
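
For example (database name and user are placeholders):

[bash]
# PostgreSQL: refresh statistics and reclaim dead rows in the Alfresco database
psql -U alfresco -d alfresco -c "VACUUM ANALYZE;"

# MySQL: refresh index statistics for all tables in the Alfresco schema
mysqlcheck -u alfresco -p --analyze alfresco
[/bash]
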
Tuning the OS side:
I'm not very good with Windows, so I will only cover a few tips for Linux:
  • Change the limits in /etc/security/limits.conf for the user who is running your app server, for example "tomcat":

[bash]
tomcat soft nofile 4096
tomcat hard nofile 65535
[/bash]

If you start Alfresco with an "su -c" option in /etc/init.d/: on Ubuntu you have to uncomment the pam_limits.so line in /etc/pam.d/su (if the user logs in by ssh it is uncommented by default); on RedHat/CentOS this line has to be uncommented in /etc/pam.d/system-auth.
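
A quick way to confirm that the new limits really apply to the service user (the user name is just the example used above):

[bash]
# open-files limits as seen by the tomcat user
su - tomcat -s /bin/bash -c 'ulimit -Sn; ulimit -Hn'
[/bash]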

  • Your storage throughput should be greater than 200 MB/sec and this can be checked by:

[bash]
# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 390 MB in  3.00 seconds = 129.85 MB/sec
[/bash]

  • Allow more concurrent requests by editing /etc/sysctl.conf

[bash]
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 2048 64512
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10
[/bash]

Run "sysctl -p" to reload the changes.
  • A full server reboot is a good preventive measure before going live: it should bring up all needed services in case of contingency, and we will find out if we left something out of the configuration.
Remember, this is ONLY A SHORTLIST; you can do much more depending on your use case. Reading the documentation and taking our official training will be helpful; take advantage of the fact that we have been polishing our training materials lately.

Alfresco security check list

The Alfresco security check list is a list of elements to check before going live with an Alfresco installation in a production environment. This check list is part of the Alfresco Security Best Practices Guide, but I wanted to give it its own post in case you missed it (things that happen, given the 30+ pages of the guide).

[slideshare id=43292696&doc=alfrescosecuritybestpractices-checklistonly-150107130033-conversion-gate02&type=d]

 

Integration of IFTTT with Alfresco

If you are not aware of what IFTTT is, I recommend you take a look at this https://ifttt.com/wtf and then come back here to continue reading this blog post.

Here is a brief demo of this integration; more details and configuration steps below.

Once you know what "if THIS then THAT" is, I want to explain how I have made a seamless integration with Alfresco using some very straightforward recipes, sending information to Alfresco in the THAT (action) part of each recipe.

Since there is no Alfresco channel in IFTTT (yet), the data flows from almost any channel to Alfresco using "Send an email from GMAIL" to the Alfresco inbound email service (to a folder). In other words, this article is about how to send many kinds of data from several IFTTT channels to Alfresco through the inbound email feature built into Alfresco.

Here is a self-explanatory example:


When I like a picture on Instagram, it gets sent to Alfresco; once in Alfresco, we have a world of possibilities such as transformations, workflows, publication, alerts, etc.

What do we need to get this working? Here is the list of steps to get it ready to go:

1- Enable your Inbound Email service in Alfresco:
For Alfresco One 4.2 this is very easy by using the new Admin Console http://localhost:8080/alfresco/service/enterprise/admin/admin-inboundemail. Explanation below.
For Alfresco Community refer to here http://docs.alfresco.com/community/concepts/community-videos-12.html and here http://docs.alfresco.com/community/concepts/email-inboundsmtp-props.html

In the Admin Console inbound email settings, I have made some changes to allow emails only from @blyx.com and @alfresco.com; anyone inside Alfresco who is a member of the EVERYONE group can send emails to a folder that has an email alias aspect. My server runs on Linux under a non-root user, which is why I set port 1025; I have a port redirect so it can be reached on port 25 from the internet. Examples of port redirects are here http://docs.alfresco.com/community/tasks/fileserv-CIFS-useracc.html.
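
A minimal sketch of such a redirect with iptables, assuming the inbound email service listens on port 1025 as above:

[bash]
# redirect incoming SMTP traffic on port 25 to the unprivileged port 1025
sudo iptables -t nat -A PREROUTING -p tcp --dport 25 -j REDIRECT --to-ports 1025
[/bash]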

In the example I have created a folder called “Drafts” with the aspect Aliasable (Email):


Edit this folder's properties and add a new value for the Alias property, in my case "drafts", which will be the email address alias of this folder, like [email protected] (alias + @ + server FQDN). I don't have to create an MX DNS record because I'm using the FQDN.


Now I'm ready to send an email to Alfresco from an existing Alfresco user (one with permission to create content); in my case [email protected] is the user toni in Alfresco.

2- Create an IFTTT recipe as shown in the video above.

3- Enjoy thousands of ways to add content to your Alfresco!

Alfresco Tip: How to enable SSL in Alfresco SharePoint Protocol

There are two ways to get the Alfresco SharePoint Protocol running over SSL and avoid having to modify the Windows registry to allow non-SSL connections from MS Office (on both Windows and Mac).

One way is to use the out-of-the-box SSL certificate that Alfresco uses for communication between itself and Solr (this blog post is about this option). The other is to generate a new certificate and configure Alfresco to use it, which is the option to choose if you want a custom certificate. The following steps were tested on Alfresco 4.2 and should work for both Enterprise and Community. Please let me know through a comment if you have an objection to this.

  • 1. Rename the file tomcat/shared/classes/alfresco/extension/vti-custom-context.xml.ssl to tomcat/shared/classes/alfresco/extension/vti-custom-context.xml; if it does not exist, just create it like below:

[xml]

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' 'http://www.springframework.org/dtd/spring-beans.dtd'>

<beans>
<!--
<bean id="vtiServerConnector" class="org.mortbay.jetty.bio.SocketConnector">
<property name="port">
<value>${vti.server.port}</value>
</property>
<property name="headerBufferSize">
<value>32768</value>
</property>
</bean>
-->

<!-- Use this Connector instead for SSL communications -->
<!-- You will need to set the location of the KeyStore holding your -->
<!-- server certificate, along with the KeyStore password -->
<!-- You should also update the vti.server.protocol property to https -->
<bean id="vtiServerConnector" class="org.mortbay.jetty.security.SslSocketConnector">
<property name="port">
<value>${vti.server.port}</value>
</property>
<property name="headerBufferSize">
<value>32768</value>
</property>
<property name="maxIdleTime">
<value>30000</value>
</property>
<property name="keystore">
<value>${vti.server.ssl.keystore}</value>
</property>
<property name="keyPassword">
<value>${vti.server.ssl.password}</value>
</property>
<property name="password">
<value>${vti.server.ssl.password}</value>
</property>
<property name="keystoreType">
<value>JCEKS</value>
</property>
</bean>
</beans>

[/xml]

  • 2. Now add the required attributes to alfresco-global.properties:

[bash]

vti.server.port=7070
vti.server.protocol=https
vti.server.ssl.keystore=/opt/alfresco/alf_data/keystore/ssl.keystore
vti.server.ssl.password=kT9X6oe68t
vti.server.url.path.prefix=/alfresco
vti.server.external.host=localhost
vti.server.external.port=7070
vti.server.external.protocol=https
vti.server.external.contextPath=/alfresco

[/bash]

Remember to change localhost to your server full name (i.e. your-server-name.domain.com).
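
If you want to double-check that the keystore referenced above is in place and readable, you can list it with keytool (the path and password are the Alfresco defaults shown above):

[bash]

keytool -list -keystore /opt/alfresco/alf_data/keystore/ssl.keystore -storetype JCEKS -storepass kT9X6oe68t

[/bash]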

  • 3. Restart the Alfresco application server and try the "Edit Online" action on an MS Office document through Alfresco Share. A warning message will appear asking you to accept the Alfresco self-signed certificate, but this is expected behavior.

Alfresco Tip: Unattended installation with one command

This tip is valid for Linux and Windows, and should apply to both Enterprise and Community. I have tried it with the latest Enterprise build, 4.2.0.3, on Ubuntu.

An unattended installation of Alfresco with MySQL support is as easy as running the single command below (all on one line):

[bash]

sudo ./alfresco-enterprise-4.2.0.3-installer-linux-x64.bin --prefix /opt/alfresco \
--unattendedmodeui none --mode unattended --debuglevel 0 \
--enable-components javaalfresco,alfrescosharepoint,alfrescogoogledocs,libreofficecomponent \
--disable-components postgres \
--jdbc_url "jdbc:mysql://localhost/dbname?useUnicode=yes&characterEncoding=UTF-8" \
--jdbc_driver org.gjt.mm.mysql.Driver --jdbc_database dbname --jdbc_username dbuser \
--jdbc_password dbpassword --alfresco_ftp_port 2121 \
--alfresco_admin_password alfrescoadminpassword --baseunixservice_install_as_service 0 \
--alfrescocustomstack_services_startup demand

[/bash]

Change "dbname", "dbuser", "dbpassword" and "alfrescoadminpassword" to your own values.

MySQL note: in the example above I'm using MySQL; in this case you must have the database already created and, when the command ends, copy the MySQL JDBC connector (mysql-connector-java-5.1.18-bin.jar) into the tomcat/lib directory.
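
If the database does not exist yet, here is a minimal sketch of creating it (using the same placeholder names as the command above):

[bash]

# create the Alfresco database and a user with full rights on it
mysql -u root -p -e "CREATE DATABASE dbname DEFAULT CHARACTER SET utf8;
CREATE USER 'dbuser'@'localhost' IDENTIFIED BY 'dbpassword';
GRANT ALL PRIVILEGES ON dbname.* TO 'dbuser'@'localhost';
FLUSH PRIVILEGES;"

[/bash]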

PostgreSQL note: if you want to use PostgreSQL, the installer will install it automatically, but the command should look like this:

[bash]

sudo ./alfresco-enterprise-4.2.0.3-installer-linux-x64.bin --prefix /opt/alfresco \
--unattendedmodeui none --mode unattended --debuglevel 0 \
--enable-components javaalfresco,postgres,alfrescosharepoint,alfrescogoogledocs,libreofficecomponent \
--jdbc_url "jdbc:postgresql://localhost/dbname?useUnicode=yes&characterEncoding=UTF-8" \
--jdbc_driver org.postgresql.Driver --jdbc_database dbname --jdbc_username dbuser \
--jdbc_password dbpassword --alfresco_ftp_port 2121 \
--alfresco_admin_password alfrescoadminpassword --baseunixservice_install_as_service 0 \
--alfrescocustomstack_services_startup demand

[/bash]

In the case of PostgreSQL, no JDBC library has to be copied because the installer takes care of it.

Remember that the unattended installation takes a minute or two to finish; be patient.

More information and options? "--help" is your friend:

./alfresco-enterprise-4.2.0.3-installer-linux-x64.bin --help

Alfresco Tip: add more OpenOffice or LibreOffice processes instances to JodConverter

Do you have a bottleneck in your transformations to PDF or any other format done by LibreOffice or OpenOffice inside Alfresco?

This tip comes from a conversation with my colleague at Alfresco, Antonio Soler. Since JodConverter is supported in Alfresco Enterprise, this tip is not valid for the Community version.

Thanks to JodConverter, multiple LibreOffice or OpenOffice instances can be started to handle more transformations if needed. For example, one process can handle up to 200 transformations before it is automatically restarted; if you need to handle more than that with parallel processes, just add more ports, comma separated, in the JodConverter port option, as in the sketch below:

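This is roughly what the setting looks like in alfresco-global.properties (the port numbers are examples); the same setting is also exposed in the Admin Console:

[bash]
# one soffice process will be started per port listed
jodconverter.portNumbers=8100,8101,8102
[/bash]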

After applying this change you should see three soffice processes running (for example with "ps -ef | grep soffice").


Remember: if you are using OpenOffice you will see "soffice.bin" processes, and ".soffice.bin" for LibreOffice.

If you want to know more about the new Admin Panel visit this blog post: http://blogs.alfresco.com/wp/kevinr/2013/09/30/alfresco-repository-admin-console/

Alfresco Tip: take control of and customize your logs (alfresco.log, share.log and solr.log)

Are you wondering how to get full control over the Alfresco logs? If you are an Alfresco administrator, I'm pretty sure you want to manage where alfresco.log, share.log and solr.log are placed, right?

I assume you want to store all your Alfresco logs in /opt/alfresco/tomcat/logs, which is the default logging directory for Tomcat and where you can find the catalina.out log file as well as many other out-of-the-box logging files for this well-known application server.

If you use the Alfresco installer or a default installation, log files like alfresco.log, share.log and solr.log may be created wherever you run the "alfresco.sh start" script or wherever you start Tomcat. For example, in an installation placed in /opt/alfresco, when you start Alfresco with ./alfresco.sh start (once you are in /opt/alfresco) those 3 files will be created in /opt/alfresco. If you are using the init.d start/stop script for RedHat or Ubuntu, you will see log files created in the root "/" directory or maybe in the user's home directory (it may depend).

Here is how to manage all of these:
(Disclaimer: remember that, even after doing everything said here, Alfresco will still log some exceptions before the extension file overrides take effect.)

  • Alfresco repository logs:

Valid for any Alfresco version. Copy the original log4j.properties from the deployed alfresco war file to the extension directory, renamed as custom-log4j.properties:

[bash]

cp /opt/alfresco/tomcat/webapps/alfresco/WEB-INF/classes/log4j.properties /opt/alfresco/tomcat/shared/classes/alfresco/extension/custom-log4j.properties

[/bash]

Edit the custom-log4j.properties file and modify "log4j.appender.File.File" to suit your needs, or like here:

[bash]
###### File appender definition #######
log4j.appender.File=org.apache.log4j.DailyRollingFileAppender
log4j.appender.File.File=/opt/alfresco/tomcat/logs/alfresco.log
log4j.appender.File.Append=true
log4j.appender.File.DatePattern='.'yyyy-MM-dd
log4j.appender.File.layout=org.apache.log4j.PatternLayout
log4j.appender.File.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c] %m%n
[/bash]

  • Alfresco Share logs:

At the moment there is no extension mechanism for Share logs, so we cannot do it the same way as for the Alfresco repository. In this case you can only edit the /opt/alfresco/tomcat/webapps/share/WEB-INF/classes/log4j.properties file and modify the appender line as shown below:

[bash]
log4j.appender.File.File=/opt/alfresco/tomcat/logs/share.log
[/bash]

The bad news with this method is that you will need to do it again whenever you upgrade Alfresco Share or redeploy share.war.

  • Solr logs:

In Alfresco 4.2 (for previous versions see below) you will find the configuration file at alf_data/solr/log4j-solr.properties; change the "log4j.appender.File.File" line as shown below:

[bash]
# Set root logger level to error
log4j.rootLogger=WARN, Console, File

###### Console appender definition #######

# All outputs currently set to be a ConsoleAppender.
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout

log4j.appender.Console.layout.ConversionPattern=%d{ISO8601} %x %-5p [%c{3}] [%t] %m%n

###### File appender definition #######
log4j.appender.File=org.apache.log4j.DailyRollingFileAppender
log4j.appender.File.File=/opt/alfresco/tomcat/logs/solr.log
log4j.appender.File.Append=true
log4j.appender.File.DatePattern='.'yyyy-MM-dd
log4j.appender.File.layout=org.apache.log4j.PatternLayout
log4j.appender.File.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c] %m%n

###### added Alfresco SOLR class logging #######
log4j.logger.org.alfresco.repo.search.impl.solr=INFO
log4j.logger.org.alfresco.solr.tracker.CoreTracker=ERROR
[/bash]

In previous Alfresco versions, just bear in mind to copy the log4j file into the "solr/home" location defined in "{tomcat}/conf/Catalina/{hostname}/solr.xml". You may also need to reload the Solr log4j resource from the Solr admin panel: https://localhost:8443/solr/admin/cores?action=LOG4J&resource=log4j-solr.properties
Also remember that you can use https://localhost:8443/solr/alfresco/admin/logging to manage your Solr logs.

More info about Solr logs here: http://wiki.alfresco.com/wiki/Alfresco_And_SOLR#Load_Log4J_Settings and here https://issues.alfresco.com/jira/browse/MNT-5803

  • The last step for any of these logging changes is to restart your application server.

If you want to see and manage logging with a web tool, check out Alfresco Support Tools (for Alfresco Enterprise only) here: https://addons.alfresco.com/addons/support-tools-admin-console; it includes a section dedicated to logging.


If you want to know more about the new Admin Panel visit this blog post: http://blogs.alfresco.com/wp/kevinr/2013/09/30/alfresco-repository-admin-console/

UPDATE! Feb 20th

As Cesar mentioned in the comments, the easiest way to control where your logs are located is to add this line to your init.d script (mind these variables):

[bash]

su $ALF_USER -c "cd $ALF_LOGS && $ALF_HOME/alfresco.sh start"

[/bash]

Or, if you are using the alfresco.sh script directly, add the following lines just before the "ERROR=0" line:

[bash]

LOGSDIR=/opt/alfresco/tomcat/logs

cd $LOGSDIR

[/bash]